no code implementations • 16 May 2021 • Vipul Gupta, Avishek Ghosh, Michal Derezinski, Rajiv Khanna, Kannan Ramchandran, Michael Mahoney
To enhance practicality, we devise an adaptive scheme for choosing L, and we show that it reduces the number of local iterations performed by worker machines between two model synchronizations as training proceeds, successively refining the model quality at the master.
no code implementations • NeurIPS 2020 • Michal Derezinski, Rajiv Khanna, Michael W. Mahoney
The Column Subset Selection Problem (CSSP) and the Nyström method are among the leading tools for constructing small low-rank approximations of large datasets in machine learning and scientific computing.
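As a brief illustration of the Nyström method referenced above, the sketch below (with hypothetical variable names; not the paper's own code) approximates a large positive semidefinite kernel matrix from a small sampled column subset:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
K = X @ X.T  # full PSD kernel (n x n); in practice too large to work with directly

m = 20  # number of sampled "landmark" columns
S = rng.choice(K.shape[0], size=m, replace=False)
C = K[:, S]             # n x m block of sampled columns
W = K[np.ix_(S, S)]     # m x m intersection block
K_approx = C @ np.linalg.pinv(W) @ C.T  # rank-<=m Nystrom approximation

rel_err = np.linalg.norm(K - K_approx) / np.linalg.norm(K)
```

Here K has rank 5, so 20 randomly sampled columns almost surely span its column space and the reconstruction is essentially exact; for full-rank kernels the choice of columns (the CSSP part) governs the approximation error.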
1 code implementation • NeurIPS 2020 • Daniele Calandriello, Michal Derezinski, Michal Valko
Determinantal point processes (DPPs) are a useful probabilistic model for selecting a small diverse subset out of a large collection of items, with applications in summarization, recommendation, stochastic optimization, experimental design and more.
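To make the diversity-selection property of DPPs concrete, a toy L-ensemble (a hypothetical example, not from the paper) assigns each subset S an unnormalized probability det(L_S), so subsets of near-duplicate items score low and diverse subsets score high:

```python
import numpy as np
from itertools import combinations

# Three items as feature vectors; items 0 and 1 are nearly identical.
V = np.array([[1.00, 0.00],
              [0.99, 0.14],
              [0.00, 1.00]])
L = V @ V.T  # similarity kernel of the L-ensemble

def unnormalized_prob(S):
    """P(S) is proportional to the determinant of the principal submatrix L_S."""
    return np.linalg.det(L[np.ix_(S, S)])

pair_scores = {S: unnormalized_prob(list(S)) for S in combinations(range(3), 2)}
# The redundant pair {0, 1} scores far below the two diverse pairs.
```

The determinant equals the squared volume spanned by the selected feature vectors, which is why nearly parallel (redundant) items suppress a subset's probability.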
no code implementations • NeurIPS 2014 • Michal Derezinski, Manfred K. Warmuth
We conjecture that our hardness results hold for any training algorithm based on squared Euclidean distance regularization (i.e., back-propagation with the weight-decay heuristic).