Search Results for author: Osama A. Hanna

Found 7 papers, 0 papers with code

Multi-Agent Bandit Learning through Heterogeneous Action Erasure Channels

no code implementations21 Dec 2023 Osama A. Hanna, Merve Karakas, Lin F. Yang, Christina Fragouli

To our knowledge, these are the first algorithms capable of effectively learning through heterogeneous action erasure channels.

Scheduling

Contexts can be Cheap: Solving Stochastic Contextual Bandits with Linear Bandit Algorithms

no code implementations8 Nov 2022 Osama A. Hanna, Lin F. Yang, Christina Fragouli

When the context distribution is unknown, we give an algorithm that reduces the stochastic contextual instance to a sequence of linear bandit instances with small misspecification, and achieves nearly the same worst-case regret bound as an algorithm that solves those misspecified linear bandit instances directly.

Multi-Armed Bandits
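The reduction above works by repeatedly invoking a linear bandit algorithm as a subroutine. For reference, here is a minimal LinUCB-style linear bandit of the kind such a reduction would call — a toy sketch with a synthetic Gaussian-noise reward model, not the paper's algorithm; all names and parameters are illustrative:

```python
import numpy as np

def lin_ucb(arms, theta_star, T, alpha=1.0, noise=0.1, seed=0):
    """Minimal LinUCB-style stochastic linear bandit (toy sketch).

    `arms` is a (K, d) array of feature vectors; rewards are
    <theta_star, arm> plus Gaussian noise. Returns cumulative regret.
    """
    rng = np.random.default_rng(seed)
    K, d = arms.shape
    A = np.eye(d)          # regularized Gram matrix of played features
    b = np.zeros(d)        # running sum of reward-weighted features
    regret = 0.0
    best = (arms @ theta_star).max()
    for _ in range(T):
        A_inv = np.linalg.inv(A)
        theta_hat = A_inv @ b  # ridge estimate of theta_star
        # optimistic score: estimated mean + exploration bonus
        ucb = arms @ theta_hat + alpha * np.sqrt(
            np.einsum('kd,dc,kc->k', arms, A_inv, arms))
        k = int(np.argmax(ucb))
        x = arms[k]
        r = x @ theta_star + noise * rng.standard_normal()
        A += np.outer(x, x)
        b += r * x
        regret += best - x @ theta_star
    return regret
```

On a small instance the cumulative regret grows sublinearly in T, which is the property the reduction inherits.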

Differentially Private Stochastic Linear Bandits: (Almost) for Free

no code implementations7 Jul 2022 Osama A. Hanna, Antonious M. Girgis, Christina Fragouli, Suhas Diggavi

In the shuffled model, we also achieve regret of $\tilde{O}(\sqrt{T}+\frac{1}{\epsilon})$ for small $\epsilon$, as in the central case, while the best previously known algorithm suffers a regret of $\tilde{O}(\frac{1}{\epsilon}T^{3/5})$.

Learning in Distributed Contextual Linear Bandits Without Sharing the Context

no code implementations8 Jun 2022 Osama A. Hanna, Lin F. Yang, Christina Fragouli

The contextual linear bandit is a rich and theoretically important model with many practical applications.

Solving Multi-Arm Bandit Using a Few Bits of Communication

no code implementations11 Nov 2021 Osama A. Hanna, Lin F. Yang, Christina Fragouli

Existing works usually fail to account for this communication cost and can become infeasible in certain applications.

Active Learning, Quantization
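The title's "few bits" idea can be illustrated with the simplest possible scheme: stochastically rounding a bounded reward to a single bit, which keeps the estimate of each arm's mean unbiased. This is a generic hedged sketch, not the quantizer from the paper:

```python
import numpy as np

def one_bit_reward(r, rng):
    """Stochastically round a reward r in [0, 1] to a single bit.

    E[bit] = r, so a bandit algorithm fed these bits observes an
    unbiased (if noisier) estimate of each arm's mean reward.
    Illustrative sketch only, not the paper's scheme.
    """
    return int(rng.random() < r)
```

Averaging many such bits recovers the true mean reward, at the cost of extra variance per round.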

Quantizing data for distributed learning

no code implementations14 Dec 2020 Osama A. Hanna, Yahya H. Ezzeldin, Christina Fragouli, Suhas Diggavi

In this paper, we propose an alternative approach to learning from distributed data that quantizes the data instead of the gradients, and can therefore support applications where the size of gradient updates is prohibitive.

Quantization
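As a concrete, hypothetical illustration of quantizing data rather than gradients, a uniform per-coordinate quantizer might look like the following. The paper designs quantizers tuned to the learning task, so this generic uniform grid is only a baseline sketch:

```python
import numpy as np

def quantize_data(X, bits=4, lo=-1.0, hi=1.0):
    """Uniformly quantize data samples (not gradients) to `bits` bits
    per coordinate before sending them to the learner.

    Values are clipped to [lo, hi], mapped to 2**bits - 1 grid steps,
    and dequantized back to the grid point. Illustrative sketch only.
    """
    levels = 2 ** bits - 1
    Xc = np.clip(X, lo, hi)
    idx = np.round((Xc - lo) / (hi - lo) * levels)   # integer codes
    return lo + idx * (hi - lo) / levels             # dequantized values
```

With 4 bits per coordinate, each (clipped) value is reproduced to within half a grid step, i.e. (hi - lo) / (2 * (2**bits - 1)).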

On Distributed Quantization for Classification

no code implementations1 Nov 2019 Osama A. Hanna, Yahya H. Ezzeldin, Tara Sadjadpour, Christina Fragouli, Suhas Diggavi

We consider the problem of distributed feature quantization, where the goal is to enable a pretrained classifier at a central node to carry out its classification on features gathered from distributed nodes over communication-constrained channels.

Classification, General Classification +1
