no code implementations • 2 May 2024 • Sajjad Ghiasvand, Amirhossein Reisizadeh, Mahnoosh Alizadeh, Ramtin Pedarsani
Local updates are essential in Federated Learning (FL) applications to mitigate the communication bottleneck, and gradient tracking is essential for proving convergence under data heterogeneity.
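To make the gradient-tracking ingredient concrete, here is a minimal sketch of the standard tracking update on a toy quadratic objective; the ring mixing matrix, step size, and objective are illustrative assumptions, and the local-update ingredient is omitted for brevity:

```python
import numpy as np

# Toy setup: n nodes, each with a local quadratic f_i(x) = 0.5*a_i*(x - b_i)^2,
# so the global minimizer is the weighted mean of the b_i.
rng = np.random.default_rng(0)
n = 5
a = rng.uniform(0.5, 2.0, n)
b = rng.uniform(-1.0, 1.0, n)
grad = lambda x: a * (x - b)           # element i is the gradient of f_i at x[i]

# Doubly stochastic mixing matrix for a ring graph (illustrative choice).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

alpha = 0.1
x = np.zeros(n)                        # local iterates
y = grad(x)                            # gradient trackers, initialized to local grads
g_old = y.copy()

for _ in range(200):
    x = W @ x - alpha * y              # consensus step + tracked-gradient step
    g_new = grad(x)
    y = W @ y + g_new - g_old          # tracking update: y follows the average gradient
    g_old = g_new

print("consensus iterates:", x)
print("global optimum    :", np.sum(a * b) / np.sum(a))
```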
no code implementations • 5 Feb 2024 • Payam Delgosha, Hamed Hassani, Ramtin Pedarsani
In this paper, we focus on the $\ell_0$-bounded adversarial attacks, and aim to theoretically characterize the performance of adversarial training for an important class of truncated classifiers.
no code implementations • 4 Feb 2024 • Justin S. Kang, Yigit E. Erginbas, Landon Butler, Ramtin Pedarsani, Kannan Ramchandran
In the case where all interactions are between at most $t = \Theta(n^{\alpha})$ inputs, for $\alpha < 0.409$, we are able to leverage results from group testing to provide the first algorithm that computes the Möbius transform in $O(Kt\log n)$ sample complexity and $O(K\mathrm{poly}(n))$ time with vanishing error as $K \rightarrow \infty$.
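For context, the Möbius transform itself can be computed exactly, without the paper's sample-efficiency, by the standard $O(n 2^n)$ subset-lattice dynamic program; a naive baseline sketch:

```python
def mobius_transform(f, n):
    """Naive Möbius transform over the subset lattice, indexing subsets by bitmask:
    out[S] = sum over T subset of S of (-1)^{|S \\ T|} * f[T]."""
    out = list(f)
    for i in range(n):                     # standard O(n * 2^n) dynamic program
        for S in range(1 << n):
            if S & (1 << i):
                out[S] -= out[S ^ (1 << i)]
    return out

# A set function with sparse interaction coefficients is recovered exactly.
n = 3
fhat_true = {0b000: 3, 0b001: 2, 0b011: 5}       # sparse Möbius coefficients
f = [sum(v for T, v in fhat_true.items() if T & S == T) for S in range(1 << n)]
print({S: v for S, v in enumerate(mobius_transform(f, n)) if v})  # == fhat_true
```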
no code implementations • 2 Feb 2024 • Mark Beliaev, Ramtin Pedarsani
In Imitation Learning (IL), utilizing suboptimal and heterogeneous demonstrations presents a substantial challenge due to the varied nature of real-world data.
no code implementations • 17 Mar 2023 • Mark Beliaev, Negar Mehr, Ramtin Pedarsani
In this paper we study the pickup and delivery problem with multiple transportation modalities, and address the challenge of efficiently allocating transportation resources while price matching users with their desired delivery modes.
no code implementations • 30 Jan 2023 • Justin Kang, Ramtin Pedarsani, Kannan Ramchandran
We also formulate a heterogeneous federated learning problem for the platform with privacy level options for users.
1 code implementation • 13 Oct 2022 • Ozgur Guldogan, Yuchen Zeng, Jy-yong Sohn, Ramtin Pedarsani, Kangwook Lee
In order to promote long-term fairness, we propose a new fairness notion called Equal Improvability (EI), which equalizes the potential acceptance rate of the rejected samples across different groups assuming a bounded level of effort will be spent by each rejected sample.
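A minimal sketch of how an EI gap could be measured for a linear scorer, using the fact that an effort bounded by $\|\Delta x\|_2 \le \delta$ can raise the score by at most $\delta \|w\|_2$; the function and data below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def equal_improvability_gap(X, group, w, b, delta):
    """Gap in potential acceptance rates of *rejected* samples across two groups,
    for a linear scorer s(x) = w.x + b with acceptance rule s(x) >= 0 and a
    bounded effort ||dx||_2 <= delta (an illustrative instantiation of EI)."""
    scores = X @ w + b
    rejected = scores < 0
    # Best achievable score increase under the effort budget is delta * ||w||_2.
    improvable = scores + delta * np.linalg.norm(w) >= 0
    rates = []
    for g in (0, 1):
        mask = rejected & (group == g)
        rates.append(improvable[mask].mean() if mask.any() else 0.0)
    return abs(rates[0] - rates[1])

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
group = rng.integers(0, 2, 500)
print(equal_improvability_gap(X, group, w=np.array([1.0, -0.5]), b=-0.3, delta=0.5))
```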
no code implementations • 6 Jun 2022 • Farzan Farnia, Amirhossein Reisizadeh, Ramtin Pedarsani, Ali Jadbabaie
In this paper, we focus on this problem and propose a novel personalized Federated Learning scheme based on Optimal Transport (FedOT), a learning algorithm that jointly learns the optimal transport maps for transferring data points to a common distribution and the prediction model under the applied transport maps.
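FedOT learns the transport maps jointly with the prediction model; as a toy illustration of transporting client data to a common distribution, the sketch below uses the closed-form 1-D optimal transport map (quantile matching). The reference distribution and all names are assumptions, not the paper's construction:

```python
import numpy as np

def ot_map_1d(source, target):
    """Closed-form 1-D optimal transport (Monge) map between empirical
    distributions: send each point to the target quantile matching its
    source-CDF rank."""
    src, tgt = np.sort(source), np.sort(target)
    def transport(x):
        q = np.searchsorted(src, x, side="right") / len(src)   # empirical CDF
        return np.quantile(tgt, np.clip(q, 0.0, 1.0))
    return transport

rng = np.random.default_rng(2)
client_a = rng.normal(2.0, 1.0, 1000)       # two clients with shifted features
client_b = rng.normal(-1.0, 0.5, 1000)
reference = np.concatenate([client_a, client_b])  # common distribution (a choice)
to_common = ot_map_1d(client_a, reference)
print(to_common(np.array([1.0, 2.0, 3.0])))       # client A mapped into the reference
```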
1 code implementation • 5 Jun 2022 • Isidoros Tziotis, Zebang Shen, Ramtin Pedarsani, Hamed Hassani, Aryan Mokhtari
Federated Learning is an emerging learning paradigm that allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
no code implementations • 9 Mar 2022 • Payam Delgosha, Hamed Hassani, Ramtin Pedarsani
We introduce a classification method which employs a nonlinear component called truncation, and show that, in an asymptotic scenario, as long as the adversary is restricted to perturbing no more than $\sqrt{d}$ data samples, we can almost achieve the optimal classification error in the absence of the adversary, i.e., we can completely neutralize the adversary's effect.
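A toy sketch of the truncation mechanism on synthetic Gaussian data: clipping each coordinate's contribution to the decision statistic caps the influence of any single corrupted coordinate (dimensions, threshold, and noise level are illustrative assumptions):

```python
import numpy as np

def truncated_classify(x, w, tau):
    """Linear classifier with per-coordinate truncation: each contribution
    w_i * x_i is clipped to [-tau, tau], so an l0 adversary controlling a few
    coordinates can shift the decision statistic by at most 2*tau each."""
    return np.sign(np.clip(w * x, -tau, tau).sum())

rng = np.random.default_rng(3)
d, k = 400, 10
mu = np.ones(d) / np.sqrt(d)               # class mean, ||mu||_2 = 1
x = mu + 0.1 * rng.normal(size=d)          # clean sample from the +1 class
x_adv = x.copy()
x_adv[:k] = -100.0                         # l0 attack on k coordinates
print(np.sign(mu @ x_adv))                 # plain linear classifier: fooled (-1)
print(truncated_classify(x_adv, mu, tau=0.01))  # truncated classifier: robust (+1)
```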
1 code implementation • 2 Feb 2022 • Mark Beliaev, Andy Shih, Stefano Ermon, Dorsa Sadigh, Ramtin Pedarsani
In this work, we show that unsupervised learning over demonstrator expertise can lead to a consistent boost in the performance of imitation learning algorithms.
no code implementations • 23 Jan 2022 • Mark Beliaev, Payam Delgosha, Hamed Hassani, Ramtin Pedarsani
In the past two decades we have seen the popularity of neural networks increase in conjunction with their classification accuracy.
no code implementations • 4 Dec 2021 • Bhagyashree Puranik, Upamanyu Madhow, Ramtin Pedarsani
We derive the worst-case attack for the GLRT defense, and show that its asymptotic performance (as the dimension of the data increases) approaches that of the minimax defense.
no code implementations • 13 May 2021 • Woodrow Z. Wang, Mark Beliaev, Erdem Biyik, Daniel A. Lazar, Ramtin Pedarsani, Dorsa Sadigh
Coordination is often critical to forming prosocial behaviors -- behaviors that increase the overall sum of rewards received by all agents in a multi-agent game.
no code implementations • 6 May 2021 • Erdem Biyik, Daniel A. Lazar, Ramtin Pedarsani, Dorsa Sadigh
Traffic congestion has large economic and social costs.
no code implementations • 5 Apr 2021 • Payam Delgosha, Hamed Hassani, Ramtin Pedarsani
Under the assumption that data is distributed according to the Gaussian mixture model, our goal is to characterize the optimal robust classifier and the corresponding robust classification error as well as a variety of trade-offs between robustness, accuracy, and the adversary's budget.
no code implementations • 28 Dec 2020 • Mark Beliaev, Erdem Biyik, Daniel A. Lazar, Woodrow Z. Wang, Dorsa Sadigh, Ramtin Pedarsani
In turn, significant increases in traffic congestion are expected, since people are likely to prefer using their own vehicles or taxis as opposed to riskier and more crowded options such as the railway.
no code implementations • 28 Dec 2020 • Amirhossein Reisizadeh, Isidoros Tziotis, Hamed Hassani, Aryan Mokhtari, Ramtin Pedarsani
Federated Learning is a novel paradigm that involves learning from data samples distributed across a large network of clients while the data remains local.
no code implementations • 16 Nov 2020 • Bhagyashree Puranik, Upamanyu Madhow, Ramtin Pedarsani
We evaluate the GLRT approach for the special case of binary hypothesis testing in white Gaussian noise under $\ell_{\infty}$ norm-bounded adversarial perturbations, a setting for which a minimax strategy optimizing for the worst-case attack is known.
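For a fixed hypothesis $s \in \{\pm 1\}$, maximizing the likelihood over $\|e\|_\infty \le \epsilon$ amounts to clipping the residual coordinate-wise, which leaves a soft-thresholded residual; a minimal sketch of the resulting GLRT rule (problem sizes are illustrative):

```python
import numpy as np

def glrt_decide(x, mu, eps):
    """GLRT for x = s*mu + e + noise with s in {+1, -1} and ||e||_inf <= eps:
    under each hypothesis the worst-case perturbation is absorbed by clipping,
    leaving a soft-thresholded residual; pick the hypothesis with the smaller one."""
    def residual(s):
        r = x - s * mu
        r = np.sign(r) * np.maximum(np.abs(r) - eps, 0.0)   # soft threshold
        return np.sum(r ** 2)
    return 1 if residual(+1) <= residual(-1) else -1

rng = np.random.default_rng(4)
d, eps, sigma = 100, 0.2, 0.5
mu = rng.uniform(0.1, 1.0, d)
s_true = 1
e = eps * np.sign(rng.normal(size=d))       # an l_inf-bounded perturbation
x = s_true * mu + e + sigma * rng.normal(size=d)
print(glrt_decide(x, mu, eps))              # recovers s_true with high probability
```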
no code implementations • 26 Oct 2020 • Hossein Taheri, Ramtin Pedarsani, Christos Thrampoulidis
It has been consistently reported that many machine learning models are susceptible to adversarial attacks, i.e., small additive adversarial perturbations applied to data points can cause misclassification.
no code implementations • NeurIPS 2020 • Amirhossein Reisizadeh, Farzan Farnia, Ramtin Pedarsani, Ali Jadbabaie
In such settings, the training data is often statistically heterogeneous and manifests various distribution shifts across users, which degrades the performance of the learnt model.
no code implementations • 16 Jun 2020 • Hossein Taheri, Ramtin Pedarsani, Christos Thrampoulidis
For a stylized setting with Gaussian features and problem dimensions that grow large at a proportional rate, we start with sharp performance characterizations and then derive tight lower bounds on the estimation and prediction error that hold over a wide class of loss functions and for any value of the regularization parameter.
no code implementations • ICML 2020 • Hossein Taheri, Aryan Mokhtari, Hamed Hassani, Ramtin Pedarsani
We consider a decentralized stochastic learning problem where data points are distributed among computing nodes communicating over a directed graph.
1 code implementation • 22 Feb 2020 • Can Bakiskan, Soorya Gopalakrishnan, Metehan Cekic, Upamanyu Madhow, Ramtin Pedarsani
The vulnerability of deep neural networks to small, adversarially designed perturbations can be attributed to their "excessive linearity."
no code implementations • 17 Feb 2020 • Hossein Taheri, Ramtin Pedarsani, Christos Thrampoulidis
We study convex empirical risk minimization for high-dimensional inference in binary models.
no code implementations • 28 Sep 2019 • Amirhossein Reisizadeh, Aryan Mokhtari, Hamed Hassani, Ali Jadbabaie, Ramtin Pedarsani
Federated learning is a distributed framework according to which a model is trained over a set of devices, while keeping data localized.
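One common recipe in this line of work combines local updates with quantized communication; below is a minimal sketch of such a loop on a toy least-squares problem. The quantizer, step sizes, and data are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def quantize(v, levels=8):
    """Simple uniform stochastic quantizer (illustrative; unbiased in expectation)."""
    vmax = np.max(np.abs(v)) + 1e-12
    scaled = np.abs(v) / vmax * levels
    low = np.floor(scaled)
    q = low + (np.random.rand(*v.shape) < (scaled - low))   # stochastic rounding
    return np.sign(v) * q * vmax / levels

rng = np.random.default_rng(5)
n_clients, d, local_steps, rounds, lr = 10, 20, 5, 50, 0.05
w_star = rng.normal(size=d)
data = []
for _ in range(n_clients):                     # each client holds a local dataset
    A = rng.normal(size=(50, d))
    data.append((A, A @ w_star + 0.1 * rng.normal(size=50)))

w = np.zeros(d)                                # server model
for _ in range(rounds):
    updates = []
    for A, y in data:
        w_local = w.copy()
        for _ in range(local_steps):           # local gradient steps, data stays put
            grad = A.T @ (A @ w_local - y) / len(y)
            w_local -= lr * grad
        updates.append(quantize(w_local - w))  # send a quantized model delta
    w += np.mean(updates, axis=0)              # periodic averaging at the server
print("error:", np.linalg.norm(w - w_star))
```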
no code implementations • 12 Aug 2019 • Hossein Taheri, Ramtin Pedarsani, Christos Thrampoulidis
We study the performance of a wide class of convex optimization-based estimators for recovering a signal from corrupted one-bit measurements in high-dimensions.
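As a minimal instance of that estimator class, plain least squares on the one-bit labels already recovers the signal direction for Gaussian measurements; a toy sketch with a small fraction of flipped measurements (all parameters illustrative):

```python
import numpy as np

# One-bit measurements: y = sign(A x) with a few corrupted signs; recover the
# direction of x (scale is unidentifiable from signs alone) via least squares.
rng = np.random.default_rng(6)
m, d = 2000, 50
x = rng.normal(size=d)
x /= np.linalg.norm(x)                      # ground-truth direction
A = rng.normal(size=(m, d))
y = np.sign(A @ x)
flips = rng.random(m) < 0.05                # 5% of measurements corrupted
y[flips] *= -1

x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
x_hat /= np.linalg.norm(x_hat)
print("cosine similarity:", x_hat @ x)      # close to 1: direction recovered
```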
1 code implementation • NeurIPS 2019 • Amirhossein Reisizadeh, Hossein Taheri, Aryan Mokhtari, Hamed Hassani, Ramtin Pedarsani
We consider a decentralized learning problem, where a set of computing nodes aim at solving a non-convex optimization problem collaboratively.
no code implementations • 6 Feb 2019 • Amirhossein Reisizadeh, Saurav Prakash, Ramtin Pedarsani, Amir Salman Avestimehr
That is, it parallelizes the communications over a tree topology leading to efficient bandwidth utilization, and carefully designs a redundant data set allocation and coding strategy at the nodes to make the proposed gradient aggregation scheme robust to stragglers.
1 code implementation • 24 Oct 2018 • Soorya Gopalakrishnan, Zhinus Marzi, Metehan Cekic, Upamanyu Madhow, Ramtin Pedarsani
We also devise attacks based on the locally linear model that outperform the well-known FGSM attack.
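For reference, the FGSM baseline mentioned here is a one-step sign-of-gradient perturbation; a minimal sketch for a logistic model (the model and sizes are illustrative, and this is the baseline, not the paper's stronger locally-linear attack):

```python
import numpy as np

def fgsm(x, y, w, b, eps):
    """FGSM for logistic regression: perturb x by eps * sign of the loss gradient.
    The gradient of log(1 + exp(-y * (w.x + b))) w.r.t. x is -y * w * sigmoid(-y * f),
    so for a linear model the perturbation direction is simply -y * sign(w)."""
    grad_x = -y * w / (1.0 + np.exp(y * (w @ x + b)))
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(7)
d = 100
w = rng.normal(size=d); b = 0.0
x = rng.normal(size=d); y = np.sign(w @ x + b)   # a correctly classified point
x_adv = fgsm(x, y, w, b, eps=0.3)
print(np.sign(w @ x + b) == y)                   # True: clean point is correct
print(np.sign(w @ x_adv + b) == y)               # usually False: attack flips it
```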
no code implementations • 29 Jun 2018 • Amirhossein Reisizadeh, Aryan Mokhtari, Hamed Hassani, Ramtin Pedarsani
We consider the problem of decentralized consensus optimization, where the sum of $n$ smooth and strongly convex functions is minimized over $n$ distributed agents that form a connected network.
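A minimal sketch of the classical decentralized gradient descent baseline for this setup, with each agent mixing with its ring neighbors and taking diminishing local gradient steps (the mixing matrix, objective, and step sizes are illustrative choices):

```python
import numpy as np

# Decentralized gradient descent on a ring of n agents, each holding
# f_i(x) = 0.5 * (x - c_i)^2; the global optimum is the mean of the c_i.
rng = np.random.default_rng(8)
n = 8
c = rng.uniform(-2, 2, n)
W = np.zeros((n, n))                      # doubly stochastic ring mixing matrix
for i in range(n):
    W[i, i], W[i, (i - 1) % n], W[i, (i + 1) % n] = 0.5, 0.25, 0.25

x = np.zeros(n)
for k in range(1, 1000):
    x = W @ x - (1.0 / k) * (x - c)       # mix with neighbors, then local gradient step
print("agents:", x.round(3))
print("target:", round(c.mean(), 3))
```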
3 code implementations • 11 Mar 2018 • Soorya Gopalakrishnan, Zhinus Marzi, Upamanyu Madhow, Ramtin Pedarsani
It is by now well-known that small adversarial perturbations can induce classification errors in deep neural networks (DNNs).
3 code implementations • 15 Jan 2018 • Zhinus Marzi, Soorya Gopalakrishnan, Upamanyu Madhow, Ramtin Pedarsani
In this paper, we study this phenomenon in the setting of a linear classifier, and show that it is possible to exploit sparsity in natural data to combat $\ell_{\infty}$-bounded adversarial perturbations.
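A toy sketch of the sparsity idea: when the data is $k$-sparse but the classifier weights are dense, a worst-case $\ell_\infty$ attack exploits every weight, while a simple top-$k$ sparsifying front end strips most of the perturbation (the sizes and the identity sparsifying basis are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(9)
d, k, eps = 1000, 20, 0.05
support = rng.choice(d, k, replace=False)
mu = np.zeros(d)
mu[support] = 1.0 / np.sqrt(k)                 # k-sparse signal, ||mu||_2 = 1
w = mu + 0.05 * rng.choice([-1.0, 1.0], d)     # dense (imperfectly learned) classifier

def sparsify(x, k):
    """Keep the k largest-magnitude coordinates, zero the rest (sparsifying front end)."""
    out = np.zeros_like(x)
    top = np.argsort(np.abs(x))[-k:]
    out[top] = x[top]
    return out

x_adv = mu - eps * np.sign(w)                  # worst-case l_inf attack on w.x
print("clean score      :", w @ mu)            # ~ +1: correctly classified
print("attacked, no def :", w @ x_adv)         # negative: dense weights exploited
print("attacked, defense:", w @ sparsify(x_adv, k))  # positive: attack mostly removed
```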
1 code implementation • 21 Jan 2017 • Amirhossein Reisizadeh, Saurav Prakash, Ramtin Pedarsani, Amir Salman Avestimehr
Recent results have demonstrated the impact of coding in efficiently utilizing computation and storage redundancy to alleviate the effects of stragglers and communication bottlenecks in homogeneous clusters.
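As a reference point for coded computation, here is a minimal sketch of $(n, k)$ MDS-coded matrix-vector multiplication, which lets the master decode from any $k$ of $n$ workers and thus tolerate $n - k$ stragglers; the block sizes and Vandermonde encoder are illustrative choices:

```python
import numpy as np

# (n, k) MDS-coded matrix-vector multiplication: split A into k row blocks,
# hand each of n workers one coded block, and decode from any k responses.
rng = np.random.default_rng(10)
n, k, rows, d = 6, 4, 8, 5                         # rows must be divisible by k
A = rng.normal(size=(rows, d))
x = rng.normal(size=d)

blocks = np.stack(np.split(A, k))                  # shape (k, rows//k, d)
G = np.vander(np.arange(1.0, n + 1), k, increasing=True)  # any k rows are invertible
coded = np.einsum("wk,krd->wrd", G, blocks)        # worker w stores sum_j G[w,j] * block_j

finishers = [0, 2, 3, 5]                           # pretend workers 1 and 4 straggle
responses = np.stack([coded[w] @ x for w in finishers])
Y = np.linalg.solve(G[finishers], responses)       # recover block_j @ x for every j
print(np.allclose(Y.reshape(rows), A @ x))         # True: full product recovered
```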
Distributed, Parallel, and Cluster Computing • Information Theory
no code implementations • 8 Dec 2015 • Kangwook Lee, Maximilian Lam, Ramtin Pedarsani, Dimitris Papailiopoulos, Kannan Ramchandran
We focus on two of the most basic building blocks of distributed learning algorithms: matrix multiplication and data shuffling.