no code implementations • 2 Feb 2024 • Aleksandar Armacki, Dragana Bajović, Dušan Jakovetić, Soummya Kar
The proposed family, termed Distributed Gradient Clustering (DGC-$\mathcal{F}_\rho$), is parametrized by $\rho \geq 1$, controlling the proximity of users' center estimates, with $\mathcal{F}$ determining the clustering loss.
no code implementations • 28 Oct 2023 • Aleksandar Armacki, Pranay Sharma, Gauri Joshi, Dragana Bajovic, Dusan Jakovetic, Soummya Kar
First, for non-convex costs and component-wise nonlinearities, we establish a convergence rate arbitrarily close to $\mathcal{O}\left(t^{-\frac{1}{4}}\right)$, whose exponent is independent of noise and problem parameters.
no code implementations • 22 Sep 2022 • Aleksandar Armacki, Dragana Bajovic, Dusan Jakovetic, Soummya Kar
In the proposed setup, both the grouping of users (based on the data distributions they sample) and the underlying statistical properties of those distributions are a priori unknown.
no code implementations • 1 Feb 2022 • Aleksandar Armacki, Dragana Bajovic, Dusan Jakovetic, Soummya Kar
We propose a general approach for distance-based clustering, based on the gradient of a cost function that measures clustering quality with respect to cluster assignments and cluster-center positions.
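A minimal sketch of the gradient-based clustering idea, assuming a squared-distance cost: points are assigned to their nearest center, then each center takes a gradient step on the resulting cost. This is an illustrative approximation, not the authors' exact algorithm, and the learning rate `lr` is a hypothetical parameter.

```python
import numpy as np

def gradient_clustering_step(X, centers, lr=0.1):
    """One gradient step on a k-means-style squared-distance cost.

    Hypothetical sketch: assign each point to its nearest center, then
    move each center along the negative gradient of the cost
    0.5 * sum_i ||x_i - c_{a(i)}||^2 with respect to the centers.
    """
    # Cluster assignments: index of the nearest center for each point.
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    assign = np.argmin(dists, axis=1)

    new_centers = centers.copy()
    for k in range(len(centers)):
        members = X[assign == k]
        if len(members) > 0:
            # Gradient of 0.5 * sum ||x - c_k||^2 w.r.t. c_k is sum (c_k - x).
            grad = (centers[k] - members).sum(axis=0)
            new_centers[k] = centers[k] - lr * grad
    return new_centers, assign
```

Note that when each center already sits at the mean of its assigned points, the gradient vanishes and the centers stay fixed, matching the usual k-means stationarity condition.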
no code implementations • 1 Feb 2022 • Aleksandar Armacki, Dragana Bajovic, Dusan Jakovetic, Soummya Kar
The proposed framework is based on a generalization of convex clustering in which the differences between different users' models are penalized via a sum-of-norms penalty, weighted by a penalty parameter $\lambda$.
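The sum-of-norms coupling term described above can be sketched as follows. This is a simplified illustration assuming uniform weights across all user pairs; the abstract does not specify the pairwise weighting, so treat the uniform choice as an assumption.

```python
import numpy as np

def sum_of_norms_penalty(W, lam):
    """Sum-of-norms coupling term: lam * sum_{i<j} ||w_i - w_j||_2.

    W   : (n_users, d) array of per-user model parameters.
    lam : penalty parameter controlling how strongly users' models
          are pulled together (lam = 0 gives fully local models;
          large lam drives all models toward a single consensus).

    Hypothetical sketch with uniform pairwise weights.
    """
    n = W.shape[0]
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            total += np.linalg.norm(W[i] - W[j])
    return lam * total
```

The non-squared norm is what makes this a convex-clustering penalty: it encourages exact equality of users' models within a cluster, rather than merely shrinking their differences.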