no code implementations • 8 Feb 2024 • Jiin Woo, Laixi Shi, Gauri Joshi, Yuejie Chi
Our sample complexity analysis reveals that, with appropriately chosen parameters and synchronization schedules, FedLCB-Q achieves linear speedup in the number of agents without requiring high-quality datasets at individual agents, as long as the local datasets collectively cover the state-action space visited by the optimal policy. This highlights the power of collaboration in the federated setting.
no code implementations • 18 May 2023 • Jiin Woo, Gauri Joshi, Yuejie Chi
When the data used for reinforcement learning (RL) are collected by multiple agents in a distributed manner, federated versions of RL algorithms allow collaborative learning without the need for agents to share their local data.
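To make the collaborative-learning idea concrete, here is a minimal toy sketch of federated tabular Q-learning with periodic server-side averaging. This is an illustrative assumption of the general pattern, not the paper's FedLCB-Q algorithm: the environment, update rule, step sizes, and averaging schedule are all hypothetical. The key property it demonstrates is that agents only exchange Q-tables, never their raw local transitions.

```python
import numpy as np

def local_q_update(Q, transitions, alpha=0.1, gamma=0.9):
    """One pass of standard tabular Q-learning over an agent's local data.

    `transitions` is a list of (state, action, reward, next_state) tuples
    collected by that agent; the raw data never leaves the agent.
    """
    Q = Q.copy()
    for s, a, r, s_next in transitions:
        target = r + gamma * Q[s_next].max()
        Q[s, a] += alpha * (target - Q[s, a])
    return Q

def federated_q_learning(local_data, n_states, n_actions, rounds=5):
    """Hypothetical federated loop: each agent refines the shared Q-table
    on its own transitions, and a server averages the local tables once
    per communication round (a FedAvg-style aggregation)."""
    Q_global = np.zeros((n_states, n_actions))
    for _ in range(rounds):
        local_tables = [local_q_update(Q_global, data) for data in local_data]
        Q_global = np.mean(local_tables, axis=0)  # server-side aggregation
    return Q_global

# Two agents, each holding transitions the other never sees.
local_data = [
    [(0, 0, 1.0, 1)],  # agent 1's private dataset
    [(1, 1, 0.5, 0)],  # agent 2's private dataset
]
Q = federated_q_learning(local_data, n_states=2, n_actions=2)
```

Even in this toy setting, the averaged table reflects rewards observed by both agents, which is the sense in which local datasets can collectively cover state-action pairs that no single agent covers alone.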