1 code implementation • 10 Jun 2023 • Shivangi Dubey Sharma, Ketan Rajawat
This work considers the problem of decentralized online learning, where the goal is to track the optimum of the sum of time-varying functions, distributed across several nodes in a network.
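As a rough illustration of this setting, the sketch below runs decentralized online gradient descent with consensus averaging on a toy drifting quadratic; the mixing matrix, step size, and local losses are illustrative assumptions, not the paper's algorithm.

```python
# Minimal sketch of decentralized online gradient descent: each node mixes
# neighbors' iterates via a doubly stochastic matrix W, then takes a local
# gradient step on its time-varying loss. All quantities here are toy choices.
import numpy as np

def decentralized_online_gd(grad_fns, W, x0, T, step=0.1):
    x = x0.copy()
    for t in range(T):
        x = W @ x                                   # consensus (mixing) step
        for i in range(len(grad_fns)):
            x[i] -= step * grad_fns[i](x[i], t)     # local gradient step at time t
    return x

# Example: 3 nodes on a small network tracking a slowly drifting optimum.
W = np.array([[0.5, 0.25, 0.25],
              [0.25, 0.5, 0.25],
              [0.25, 0.25, 0.5]])
grads = [lambda x, t, i=i: x - (np.sin(0.01 * t) + i) for i in range(3)]
x_final = decentralized_online_gd(grads, W, np.zeros((3, 1)), T=500)
```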
1 code implementation • 26 May 2023 • Aakash Lahoti, Spandan Senapati, Ketan Rajawat, Alec Koppel
Specifically, they exhibit a superlinear rate with $O(d^2)$ cost in contrast to the linear rate of first-order methods with $O(d)$ cost and the quadratic rate of second-order methods with $O(d^3)$ cost.
1 code implementation • 3 May 2023 • Yogesh Darmwal, Ketan Rajawat
This work puts forth low-complexity Riemannian subspace descent algorithms for the minimization of functions over the symmetric positive definite (SPD) manifold.
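For context, here is a minimal sketch of plain Riemannian gradient descent on the SPD manifold under the affine-invariant metric; the log-det-type objective, step size, and exponential-map retraction are illustrative assumptions, and this is not the paper's low-complexity subspace-descent scheme.

```python
# Minimal Riemannian gradient descent on the SPD manifold (affine-invariant
# metric). Objective f(X) = tr(X^{-1} A) + logdet(X), whose minimizer is X = A.
import numpy as np
from scipy.linalg import expm, sqrtm, inv

def spd_riemannian_gd(A, X0, steps=100, eta=0.1):
    X = X0.copy()
    for _ in range(steps):
        egrad = -inv(X) @ A @ inv(X) + inv(X)         # Euclidean gradient of f
        rgrad = X @ egrad @ X                         # Riemannian gradient (affine-invariant)
        Xh = sqrtm(X).real
        Xhi = inv(Xh)
        X = Xh @ expm(-eta * Xhi @ rgrad @ Xhi) @ Xh  # exponential-map step
        X = (X + X.T) / 2                             # re-symmetrize numerically
    return X

A = np.array([[2.0, 0.5], [0.5, 1.0]])
X_star = spd_riemannian_gd(A, np.eye(2))              # converges toward A
```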
1 code implementation • 17 Jun 2022 • Anis Elgabli, Chaouki Ben Issaid, Amrit S. Bedi, Ketan Rajawat, Mehdi Bennis, Vaneet Aggarwal
Newton-type methods are popular in federated learning due to their fast convergence.
no code implementations • 20 Jan 2022 • Charul Paliwal, Pravesh Biyani, Ketan Rajawat
We evaluate the proposed Variational Bayesian Filtering with Subspace Information (VBFSI) method for matrix imputation on real-world traffic and air pollution data.
no code implementations • 22 Oct 2021 • Zeeshan Akhtar, Amrit Singh Bedi, Srujan Teja Thomdapu, Ketan Rajawat
The proposed $\textbf{S}$tochastic $\textbf{C}$ompositional $\textbf{F}$rank-$\textbf{W}$olfe ($\textbf{SCFW}$) is shown to achieve a sample complexity of $\mathcal{O}(\epsilon^{-2})$ for convex objectives and $\mathcal{O}(\epsilon^{-3})$ for non-convex objectives, at par with the state-of-the-art sample complexities for projection-free algorithms solving single-level problems.
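A rough sketch of one stochastic compositional Frank-Wolfe loop over the probability simplex is given below; the inner-value tracking rule, step sizes, and toy objective $f(g(x)) = \|\mathbb{E}[A_\xi]x - b\|^2/2$ are assumptions for illustration rather than the exact SCFW updates.

```python
# Sketch of a stochastic compositional Frank-Wolfe iteration: track a running
# estimate of the inner function value, form a chain-rule gradient estimate,
# and take a projection-free (linear-minimization) step over the simplex.
import numpy as np

def scfw_simplex(sample_A, b, d, T=500):
    x = np.ones(d) / d                                # start at simplex center
    y = np.zeros_like(b)                              # running estimate of g(x) = E[A] x
    for t in range(1, T + 1):
        gamma = 2.0 / (t + 1)
        A_t = sample_A()                              # stochastic sample of inner map
        y = (1 - gamma) * y + gamma * (A_t @ x)       # track inner function value
        grad = A_t.T @ (y - b)                        # chain-rule gradient estimate
        s = np.zeros(d)
        s[np.argmin(grad)] = 1.0                      # linear minimization over simplex
        x = (1 - gamma) * x + gamma * s               # Frank-Wolfe update (projection-free)
    return x

rng = np.random.default_rng(0)
A_bar = rng.standard_normal((5, 4))
x_hat = scfw_simplex(lambda: A_bar + 0.1 * rng.standard_normal((5, 4)),
                     b=np.ones(5), d=4)
```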
no code implementations • 14 Jul 2021 • Zeeshan Akhtar, Ketan Rajawat
This paper considers stochastic convex optimization problems with two sets of constraints: (a) deterministic constraints on the domain of the optimization variable, which are difficult to project onto; and (b) deterministic or stochastic constraints that admit efficient projection.
no code implementations • NeurIPS 2021 • Prashant Khanduri, Pranay Sharma, Haibo Yang, Mingyi Hong, Jia Liu, Ketan Rajawat, Pramod K. Varshney
Despite extensive research, for a generic non-convex FL problem, it is not clear how to choose the WNs' and the server's update directions, the minibatch sizes, and the local update frequency so that the WNs use the minimum number of samples and communication rounds to achieve the desired solution.
no code implementations • 17 Dec 2020 • Srujan Teja Thomdapu, Harshvardhan, Ketan Rajawat
Of particular interest is the large-scale setting where an oracle provides the stochastic gradients of the constituent functions, and the goal is to solve the problem with a minimal number of calls to the oracle.
no code implementations • 26 Nov 2020 • Shuubham Ojha, Ketan Rajawat
In this paper, we introduce a cooperative control strategy that makes convergence to the optimum robust to communication delays.
Distributed Optimization • Optimization and Control • Systems and Control
no code implementations • 13 Nov 2020 • Abhishek Chakraborty, Ketan Rajawat, Alec Koppel
We consider expected risk minimization problems when the range of the estimator is required to be nonnegative, motivated by the settings of maximum likelihood estimation (MLE) and trajectory optimization.
1 code implementation • 3 Oct 2020 • Basil M. Idrees, Javed Akhtar, Ketan Rajawat
In the online setting, where a single sample of the stochastic gradient of the loss is available at every iteration, the problem can be solved using the proximal stochastic gradient descent (SGD) algorithm and its variants.
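The sketch below illustrates the standard proximal SGD recipe the abstract refers to, applied to an $\ell_1$-regularized streaming least-squares loss; the data model, regularization weight, and step size are illustrative assumptions.

```python
# Proximal SGD for min_x E[(a^T x - b)^2]/2 + lam * ||x||_1 with one sample
# per iteration: a stochastic gradient step on the smooth part followed by
# the soft-thresholding proximal operator of the l1 term.
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_sgd_lasso(stream, d, lam=0.1, step=0.05, T=1000):
    x = np.zeros(d)
    for _ in range(T):
        a, b = next(stream)                               # one sample per iteration
        grad = (a @ x - b) * a                            # stochastic gradient of smooth part
        x = soft_threshold(x - step * grad, step * lam)   # proximal step
    return x

rng = np.random.default_rng(1)
x_true = np.array([1.0, 0.0, -2.0, 0.0])
def data():
    while True:
        a = rng.standard_normal(4)
        yield a, a @ x_true + 0.01 * rng.standard_normal()
x_hat = prox_sgd_lasso(data(), d=4)
```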
no code implementations • 13 Aug 2020 • Zeeshan Akhtar, Amrit Singh Bedi, Ketan Rajawat
In this work, we propose the FW-CSOA algorithm that is not only projection-free but also achieves zero constraint violation with $\mathcal{O}\left(T^{-\frac{1}{4}}\right)$ decay of the optimality gap.
no code implementations • 1 May 2020 • Prashant Khanduri, Pranay Sharma, Swatantra Kafle, Saikiran Bulusu, Ketan Rajawat, Pramod K. Varshney
In this work, we propose a distributed algorithm for stochastic non-convex optimization.
Optimization and Control • Distributed, Parallel, and Cluster Computing
no code implementations • 23 Apr 2020 • Alec Koppel, Hrusikesha Pradhan, Ketan Rajawat
Gaussian processes provide a framework for nonlinear nonparametric Bayesian inference widely applicable across science and engineering.
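For reference, a minimal exact Gaussian-process regression sketch with an RBF kernel is shown below; the kernel, its length scale, and the noise level are illustrative assumptions, not the paper's construction.

```python
# Exact GP regression: posterior mean and covariance at test points X_star
# given noisy observations (X, y) under an RBF kernel.
import numpy as np

def rbf(X1, X2, length=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length**2)

def gp_posterior(X, y, X_star, noise=1e-2):
    K = rbf(X, X) + noise * np.eye(len(X))
    K_s = rbf(X, X_star)
    alpha = np.linalg.solve(K, y)
    mean = K_s.T @ alpha                                           # posterior mean
    cov = rbf(X_star, X_star) - K_s.T @ np.linalg.solve(K, K_s)    # posterior covariance
    return mean, cov

X = np.linspace(0, 1, 10)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
mean, cov = gp_posterior(X, y, np.linspace(0, 1, 50)[:, None])
```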
no code implementations • 12 Dec 2019 • Pranay Sharma, Swatantra Kafle, Prashant Khanduri, Saikiran Bulusu, Ketan Rajawat, Pramod K. Varshney
For online problems ($n$ unknown or infinite), we achieve the optimal IFO complexity $O(\epsilon^{-3/2})$.
no code implementations • 25 Sep 2019 • Alec Koppel, Amrit Singh Bedi, Ketan Rajawat, Brian M. Sadler
Batch training of machine learning models based on neural networks is now well established, whereas to date streaming methods are largely based on linear models.
no code implementations • 12 Sep 2019 • Amrit Singh Bedi, Alec Koppel, Ketan Rajawat, Brian M. Sadler
Prior works control dynamic regret growth only for linear models.
no code implementations • 1 Aug 2019 • Hrusikesha Pradhan, Amrit Singh Bedi, Alec Koppel, Ketan Rajawat
We consider learning in decentralized heterogeneous networks: agents seek to minimize a convex functional that aggregates data across the network, while only having access to their local data streams.
no code implementations • 21 Jul 2019 • Sandeep Kumar, Ketan Rajawat, Daniel P. Palomar
Different from a number of existing approaches, however, the proposed framework is flexible enough to incorporate a class of non-convex objective functions, allow distributed operation with and without a fusion center, and include variance reduced methods as special cases.
no code implementations • 16 May 2019 • Rishabh Dixit, Amrit Singh Bedi, Ketan Rajawat
The empirical performance of the proposed algorithm is tested on the distributed dynamic sparse recovery problem, where it is shown to incur a dynamic regret that is close to that of the centralized algorithm.
no code implementations • 21 Dec 2016 • Ketan Rajawat, Sandeep Kumar
Multidimensional scaling (MDS) is a popular dimensionality reduction technique that has been widely used for network visualization and cooperative localization.
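As a reminder of the standard technique this abstract builds on, here is a minimal classical MDS sketch via double centering and an eigendecomposition; it is purely illustrative and not the paper's method.

```python
# Classical MDS: double-center the squared-distance matrix to recover a Gram
# matrix, then embed using its top eigenpairs.
import numpy as np

def classical_mds(D2, dim=2):
    """D2: (n, n) matrix of squared pairwise distances."""
    n = D2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ D2 @ J                        # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]              # top `dim` eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

pts = np.random.default_rng(2).standard_normal((6, 3))
D2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
embedding = classical_mds(D2, dim=2)             # 2-D embedding of the 6 points
```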