1 code implementation • 6 Feb 2024 • Jiafei Lyu, Xiaoteng Ma, Le Wan, Runze Liu, Xiu Li, Zongqing Lu
Offline reinforcement learning (RL) has attracted much attention due to its ability to learn from static offline datasets, eliminating the need to interact with the environment.
no code implementations • 5 Feb 2024 • Jiafei Lyu, Le Wan, Xiu Li, Zongqing Lu
Recently, many efforts have attempted to learn useful policies for continuous control in visual reinforcement learning (RL).
no code implementations • 29 May 2023 • Jiafei Lyu, Le Wan, Zongqing Lu, Xiu Li
Empirical results show that SMR significantly boosts the sample efficiency of the base methods across most of the evaluated tasks without any hyperparameter tuning or additional tricks.
1 code implementation • 10 Apr 2023 • Junjie Zhang, Jiafei Lyu, Xiaoteng Ma, Jiangpeng Yan, Jun Yang, Le Wan, Xiu Li
To empirically show the advantages of TATU, we first combine it with two classical model-based offline RL algorithms, MOPO and COMBO.
no code implementations • 9 Oct 2022 • Jiafei Lyu, Aicheng Gong, Le Wan, Zongqing Lu, Xiu Li
We present state advantage weighting for offline reinforcement learning (RL).