DQN Replay Dataset
6 papers with code • 0 benchmarks • 0 datasets
Most implemented papers
Conservative Q-Learning for Offline Reinforcement Learning
We theoretically show that CQL produces a lower bound on the value of the current policy and that it can be incorporated into a policy learning procedure with theoretical improvement guarantees.
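To make the lower-bound idea concrete, the core of CQL is a conservative penalty added to the usual TD loss: the log-sum-exp of the Q-values over all actions minus the Q-value of the action actually logged in the dataset. Below is a minimal NumPy sketch of that penalty alone; the function name and toy Q-values are illustrative, not from the paper's released code.

```python
import numpy as np

def cql_penalty(q_values, dataset_actions):
    """Conservative penalty from CQL (illustrative sketch):
    E[logsumexp_a Q(s, a)] - E[Q(s, a_data)].

    q_values: (batch, num_actions) array of Q(s, a) estimates.
    dataset_actions: (batch,) integer actions taken in the logged data.
    """
    # Numerically stable log-sum-exp over the action dimension.
    m = q_values.max(axis=1, keepdims=True)
    logsumexp = m.squeeze(1) + np.log(np.exp(q_values - m).sum(axis=1))
    # Q-value of the action that actually appears in the dataset.
    data_q = q_values[np.arange(len(dataset_actions)), dataset_actions]
    return (logsumexp - data_q).mean()

# Toy check: the penalty is non-negative, since
# logsumexp_a Q(s, a) >= max_a Q(s, a) >= Q(s, a_data).
q = np.array([[1.0, 2.0, 0.5], [0.0, -1.0, 3.0]])
a = np.array([1, 2])
print(cql_penalty(q, a) >= 0.0)  # True
```

In the full algorithm this term is weighted by a coefficient and added to the standard Bellman error; minimizing it pushes Q-values down on out-of-distribution actions while keeping them up on dataset actions.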
Acme: A Research Framework for Distributed Reinforcement Learning
These implementations serve both as a validation of our design decisions and as an important contribution to reproducibility in RL research.
RL Unplugged: A Suite of Benchmarks for Offline Reinforcement Learning
We hope that our suite of benchmarks will increase the reproducibility of experiments and make it possible to study challenging tasks with a limited computational budget, thus making RL research both more systematic and more accessible across the community.
Revisiting Fundamentals of Experience Replay
Experience replay is central to off-policy algorithms in deep reinforcement learning (RL), but there remain significant gaps in our understanding.
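For readers less familiar with the mechanism being revisited, a uniform experience replay buffer stores past transitions and samples i.i.d. minibatches for off-policy updates. A minimal Python sketch (names and capacity are illustrative):

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal uniform experience replay: a bounded FIFO of transitions
    from which minibatches are sampled uniformly at random."""

    def __init__(self, capacity):
        # deque with maxlen evicts the oldest transition once full.
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform sampling without replacement within the minibatch.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

buf = ReplayBuffer(capacity=3)
for t in range(5):
    buf.add(t, 0, 1.0, t + 1, False)
print(len(buf))  # 3: capacity bounds the buffer, oldest transitions dropped
```

The paper's "fundamentals" are exactly the knobs visible here: the buffer's capacity, the ratio of gradient updates to environment steps, and how old the sampled data is relative to the current policy.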
An Optimistic Perspective on Offline Reinforcement Learning
The DQN replay dataset can serve as an offline RL benchmark and is open-sourced.
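The released logs are stored as gzipped NumPy arrays, one file per replay element and checkpoint. A hedged sketch of reading one such file is below; the exact file naming and bucket layout come from the open-source release and should be verified against the actual download, and the demo file here is synthetic, standing in for a real checkpoint.

```python
import gzip
import numpy as np

def load_element(path):
    """Read one gzipped NumPy array from the replay logs.

    Assumption: each replay element (observations, actions, rewards, ...)
    is saved as a gzip-compressed .npy file per checkpoint; verify the
    naming scheme against the released dataset before relying on it.
    """
    with gzip.open(path, "rb") as f:
        return np.load(f, allow_pickle=False)

# Demo with a synthetic file standing in for a downloaded checkpoint.
demo = "actions_demo.gz"
with gzip.open(demo, "wb") as f:
    np.save(f, np.arange(10, dtype=np.int32))
print(load_element(demo).shape)  # (10,)
```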