1 code implementation • 28 Apr 2022 • Boxiang Lyu, Filip Hanzely, Mladen Kolar
We consider the problem of personalized federated learning when there are known cluster structures within users.
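One natural way to encode a known cluster structure is a cluster-regularized objective. The following is an illustrative sketch in my own notation (the losses $f_i$, clusters $S_c$, and coupling strength $\lambda$ are assumptions for exposition, not necessarily the paper's exact formulation):

```latex
\min_{x_1,\dots,x_n}\ \frac{1}{n}\sum_{i=1}^{n} f_i(x_i)
\;+\; \frac{\lambda}{2}\sum_{c=1}^{C}\sum_{i\in S_c}\bigl\|x_i - \bar{x}_c\bigr\|^2,
\qquad \bar{x}_c = \frac{1}{|S_c|}\sum_{i\in S_c} x_i,
```

where each user $i$ keeps a personalized model $x_i$ and the penalty pulls models within the same cluster toward their cluster average.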
2 code implementations • 14 Jul 2021 • Jianyu Wang, Zachary Charles, Zheng Xu, Gauri Joshi, H. Brendan McMahan, Blaise Aguera y Arcas, Maruan Al-Shedivat, Galen Andrew, Salman Avestimehr, Katharine Daly, Deepesh Data, Suhas Diggavi, Hubert Eichner, Advait Gadhikar, Zachary Garrett, Antonious M. Girgis, Filip Hanzely, Andrew Hard, Chaoyang He, Samuel Horvath, Zhouyuan Huo, Alex Ingerman, Martin Jaggi, Tara Javidi, Peter Kairouz, Satyen Kale, Sai Praneeth Karimireddy, Jakub Konecny, Sanmi Koyejo, Tian Li, Luyang Liu, Mehryar Mohri, Hang Qi, Sashank J. Reddi, Peter Richtarik, Karan Singhal, Virginia Smith, Mahdi Soltanolkotabi, Weikang Song, Ananda Theertha Suresh, Sebastian U. Stich, Ameet Talwalkar, Hongyi Wang, Blake Woodworth, Shanshan Wu, Felix X. Yu, Honglin Yuan, Manzil Zaheer, Mi Zhang, Tong Zhang, Chunxiang Zheng, Chen Zhu, Wennan Zhu
Federated learning and analytics are a distributed approach for collaboratively learning models (or statistics) from decentralized data, motivated by and designed for privacy protection.
1 code implementation • 19 Feb 2021 • Filip Hanzely, Boxin Zhao, Mladen Kolar
We investigate the optimization aspects of personalized Federated Learning (FL).
no code implementations • NeurIPS 2021 • Mher Safaryan, Filip Hanzely, Peter Richtárik
In order to further alleviate the communication burden inherent in distributed optimization, we propose a novel communication sparsification strategy that can take full advantage of the smoothness matrices associated with local losses.
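As a generic illustration of communication sparsification, a worker might transmit only the largest-magnitude coordinates of its local gradient. The sketch below uses plain top-k selection; the smoothness-matrix-aware selection rule of the paper is not shown here, and all names are illustrative:

```python
import numpy as np

def topk_sparsify(g: np.ndarray, k: int) -> np.ndarray:
    """Keep the k largest-magnitude entries of g, zero out the rest."""
    out = np.zeros_like(g)
    idx = np.argpartition(np.abs(g), -k)[-k:]
    out[idx] = g[idx]
    return out

# Example: a worker compresses its local gradient before communicating it.
rng = np.random.default_rng(0)
grad = rng.normal(size=100)
sparse_grad = topk_sparsify(grad, k=10)  # only 10 of 100 entries are sent
```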
no code implementations • 3 Nov 2020 • Eduard Gorbunov, Filip Hanzely, Peter Richtárik
We present a unified framework for analyzing local SGD methods in the convex and strongly convex regimes for distributed/federated training of supervised machine learning models.
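A minimal sketch of the local SGD template that such analyses cover: each worker runs several local gradient steps on its own objective, and the server periodically averages the local iterates. The quadratic losses and all parameters below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n_workers, dim, local_steps, rounds, lr = 4, 5, 10, 50, 0.1

# Illustrative local objectives: f_i(x) = 0.5 * ||x - b_i||^2.
targets = rng.normal(size=(n_workers, dim))
x = np.zeros(dim)                       # server model

for _ in range(rounds):
    local_models = []
    for i in range(n_workers):
        xi = x.copy()
        for _ in range(local_steps):    # local steps (deterministic gradients here)
            xi -= lr * (xi - targets[i])
        local_models.append(xi)
    x = np.mean(local_models, axis=0)   # communication round: average models

print(np.allclose(x, targets.mean(axis=0), atol=1e-3))  # converges to the mean
```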
no code implementations • NeurIPS 2021 • Filip Hanzely, Slavomír Hanzely, Samuel Horváth, Peter Richtárik
Our first contribution is establishing the first lower bounds for this formulation, for both the communication complexity and the local oracle complexity.
no code implementations • 26 Aug 2020 • Filip Hanzely
With the growing volume of data and the increasing size and complexity of the statistical models behind these often ill-conditioned optimization tasks, new efficient algorithms are needed to cope with the resulting challenges.
no code implementations • ICML 2020 • Filip Hanzely, Nikita Doikov, Peter Richtárik, Yurii Nesterov
In this paper, we propose a new randomized second-order optimization algorithm, Stochastic Subspace Cubic Newton (SSCN), for minimizing a high-dimensional convex function $f$.
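A rough sketch of the idea: restrict the cubically regularized Newton model to a random subspace and minimize it there. This simplified version uses random coordinate subspaces and SciPy's generic solver for the cubic subproblem; the paper's exact sampling scheme and parameters may differ:

```python
import numpy as np
from scipy.optimize import minimize

def sscn_step(grad_f, hess_f, x, coords, M):
    """One cubic-Newton step restricted to the given coordinate subspace."""
    g = grad_f(x)[coords]
    H = hess_f(x)[np.ix_(coords, coords)]
    model = lambda s: g @ s + 0.5 * s @ H @ s + (M / 6) * np.linalg.norm(s) ** 3
    s = minimize(model, np.zeros(len(coords))).x  # solve the cubic subproblem
    x = x.copy()
    x[coords] += s
    return x

# Toy convex problem: f(x) = 0.5 * x^T A x with A positive definite.
rng = np.random.default_rng(2)
d, k = 20, 3
A = rng.normal(size=(d, d)); A = A @ A.T + np.eye(d)
grad_f = lambda x: A @ x
hess_f = lambda x: A
x = rng.normal(size=d)
for _ in range(200):
    coords = rng.choice(d, size=k, replace=False)  # sample a random subspace
    x = sscn_step(grad_f, hess_f, x, coords, M=1.0)
print(0.5 * x @ A @ x)  # objective decreases toward 0
```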
no code implementations • ICML 2020 • Filip Hanzely, Dmitry Kovalev, Peter Richtarik
We propose ASVRCD, an accelerated version of stochastic variance-reduced coordinate descent.
no code implementations • 10 Feb 2020 • Filip Hanzely, Peter Richtárik
We propose a new optimization formulation for training federated learning models.
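For context, work in this line studies objectives that mix purely local models with a global consensus term. The following is a sketch in my own notation rather than a verbatim statement from the paper:

```latex
\min_{x_1,\dots,x_n\in\mathbb{R}^d}\ \frac{1}{n}\sum_{i=1}^{n} f_i(x_i)
\;+\; \frac{\lambda}{2n}\sum_{i=1}^{n}\bigl\|x_i - \bar{x}\bigr\|^2,
\qquad \bar{x} := \frac{1}{n}\sum_{i=1}^{n} x_i .
```

Here $f_i$ is client $i$'s local loss and $\lambda$ trades off personalization against consensus: $\lambda = 0$ yields purely local models, while $\lambda \to \infty$ forces all $x_i$ to coincide, recovering training of a single global model.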
no code implementations • 25 Sep 2019 • Sélim Chraibi, Adil Salim, Samuel Horváth, Filip Hanzely, Peter Richtárik
Preconditioning a minimization algorithm improves its convergence and can, in some extreme cases, produce the minimizer in a single iteration.
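For instance, on a quadratic $f(x) = \frac{1}{2}x^\top A x - b^\top x$, preconditioning gradient descent with $P = A^{-1}$ turns the update into Newton's method and lands on the minimizer in one step. A minimal illustration (my own example):

```python
import numpy as np

rng = np.random.default_rng(3)
d = 10
A = rng.normal(size=(d, d)); A = A @ A.T + np.eye(d)  # SPD Hessian
b = rng.normal(size=d)

x = np.zeros(d)
grad = A @ x - b
P = np.linalg.inv(A)          # the "perfect" preconditioner
x = x - P @ grad              # one preconditioned gradient step
print(np.allclose(A @ x, b))  # True: x is the exact minimizer
```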
no code implementations • 27 May 2019 • Filip Hanzely, Peter Richtárik
We propose a remarkably general variance-reduced method suitable for solving regularized empirical risk minimization problems with either a large number of training examples, or a large model dimension, or both.
no code implementations • 27 May 2019 • Eduard Gorbunov, Filip Hanzely, Peter Richtárik
In this paper we introduce a unified analysis of a large family of variants of proximal stochastic gradient descent (SGD) which until now required different intuitions and convergence analyses, had different applications, and were developed separately in various communities.
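The common template behind these variants is the proximal SGD step $x^{k+1} = \mathrm{prox}_{\gamma R}(x^k - \gamma g^k)$, where $g^k$ is some (possibly variance-reduced) stochastic gradient estimator. A minimal sketch with an $\ell_1$ regularizer; all problem data below is illustrative:

```python
import numpy as np

def prox_l1(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Illustrative problem: min_x (1/m) sum_i 0.5*(a_i @ x - y_i)^2 + lam*||x||_1
rng = np.random.default_rng(4)
m, d, lam, gamma = 200, 10, 0.1, 0.01
A = rng.normal(size=(m, d))
y = A @ (rng.normal(size=d) * (rng.random(d) < 0.3)) + 0.01 * rng.normal(size=m)

x = np.zeros(d)
for _ in range(5000):
    i = rng.integers(m)                       # sample one data point
    g = (A[i] @ x - y[i]) * A[i]              # stochastic gradient estimate
    x = prox_l1(x - gamma * g, gamma * lam)   # proximal SGD step

print(0.5 * np.mean((A @ x - y) ** 2) + lam * np.abs(x).sum())
```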
no code implementations • 25 May 2019 • Aritra Dutta, Filip Hanzely, Jingwei Liang, Peter Richtárik
The best pair problem aims to find a pair of points, one in each of two disjoint sets, that minimizes the distance between the sets.
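For two disjoint closed convex sets, the classical approach is alternating projections: repeatedly project onto one set and then the other, and the iterates converge to a best pair. A minimal sketch with two disjoint Euclidean balls (my own example, not the algorithms studied in the paper):

```python
import numpy as np

def proj_ball(x, center, radius):
    """Euclidean projection onto the ball B(center, radius)."""
    v = x - center
    n = np.linalg.norm(v)
    return x if n <= radius else center + radius * v / n

cX, rX = np.array([0.0, 0.0]), 1.0   # set X
cY, rY = np.array([5.0, 0.0]), 1.0   # set Y, disjoint from X

x = np.array([0.3, 0.9])
for _ in range(100):
    y = proj_ball(x, cY, rY)   # nearest point of Y to x
    x = proj_ball(y, cX, rX)   # nearest point of X to y

print(x, y, np.linalg.norm(x - y))  # best pair ~ (1,0) and (4,0), distance 3
```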
no code implementations • 27 Jan 2019 • Filip Hanzely, Jakub Konečný, Nicolas Loizou, Peter Richtárik, Dmitry Grishchenko
In this work we present a randomized gossip algorithm for solving the average consensus problem while at the same time protecting the information about the initial private values stored at the nodes.
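A minimal sketch of plain (non-private) randomized pairwise gossip, the baseline such algorithms build on: at each step a random edge of the network is activated and its two endpoints replace their values by the pairwise average, so all values converge to the network-wide mean. The privacy-preserving mechanisms of the paper are not shown here, and the graph below is illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 10
values = rng.normal(size=n)  # initial private values held at the nodes
edges = [(i, j) for i in range(n) for j in range(i + 1, n)]  # complete graph
true_mean = values.mean()

for _ in range(5000):
    i, j = edges[rng.integers(len(edges))]   # activate a random edge
    avg = 0.5 * (values[i] + values[j])
    values[i] = values[j] = avg              # pairwise averaging step

print(np.abs(values - true_mean).max())     # ~0: consensus on the mean
```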
no code implementations • 27 Jan 2019 • Konstantin Mishchenko, Filip Hanzely, Peter Richtárik
We propose a fix based on a new update-sparsification method we develop in this work, which we suggest be used on top of existing methods.
no code implementations • NeurIPS 2018 • Filip Hanzely, Konstantin Mishchenko, Peter Richtarik
In each iteration, SEGA updates the current estimate of the gradient through a sketch-and-project operation using the information provided by the latest sketch, and this is subsequently used to compute an unbiased estimate of the true gradient through a random relaxation procedure.
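For coordinate sketches (observing one partial derivative per iteration, uniformly at random), the update takes a particularly simple form. The sketch below is my rendering of that special case on an illustrative quadratic; stepsize and problem data are assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
d = 20
A = rng.normal(size=(d, d)); A = A @ A.T / d + np.eye(d)  # SPD
b = rng.normal(size=d)
grad = lambda x: A @ x - b       # f(x) = 0.5 x^T A x - b^T x

x = np.zeros(d)
h = np.zeros(d)                  # running estimate of the gradient
alpha = 1e-3
for _ in range(50000):
    i = rng.integers(d)          # coordinate sketch: observe grad_i f(x)
    gi = A[i] @ x - b[i]
    g = h.copy()
    g[i] += d * (gi - h[i])      # unbiased gradient estimator (relaxation)
    h[i] = gi                    # sketch-and-project update of h
    x = x - alpha * g

print(np.linalg.norm(grad(x)))   # small gradient norm: x near the minimizer
```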
no code implementations • 21 May 2018 • Aritra Dutta, Filip Hanzely, Peter Richtárik
Robust principal component analysis (RPCA) is a well-studied problem with the goal of decomposing a matrix into the sum of low-rank and sparse components.
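To make the decomposition concrete, here is a crude alternating-thresholding heuristic for $M \approx L + S$ (block coordinate descent on a nuclear-norm plus $\ell_1$ penalized least-squares objective). This is an illustration of the problem setup, not the algorithms analyzed in the paper:

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: shrink singular values by tau."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    """Entrywise soft-thresholding."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

# Synthetic data: low-rank plus sparse.
rng = np.random.default_rng(7)
m, n, r = 50, 40, 3
L_true = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))
S_true = (rng.random((m, n)) < 0.05) * rng.normal(scale=10.0, size=(m, n))
M = L_true + S_true

L = np.zeros((m, n)); S = np.zeros((m, n))
tau, lam = 1.0, 0.5
for _ in range(200):
    L = svt(M - S, tau)    # low-rank block update
    S = soft(M - L, lam)   # sparse block update

print(np.linalg.norm(M - L - S) / np.linalg.norm(M))  # decomposition residual
```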
no code implementations • 23 Jun 2017 • Filip Hanzely, Jakub Konečný, Nicolas Loizou, Peter Richtárik, Dmitry Grishchenko
In this work we present three different randomized gossip algorithms for solving the average consensus problem while at the same time protecting the information about the initial private values stored at the nodes.