no code implementations • 6 Jan 2024 • Chujie Zhao, Tianren Zhang, Feng Chen
In light of this, we propose a simple yet effective method, termed STEP (Silent Feature Preservation), that improves the generalization performance of self-supervised contrastive pre-trained models by alleviating the suppression of silent features during supervised fine-tuning.
no code implementations • 26 Jan 2022 • Yihan Li, Jinsheng Ren, Tianrun Xu, Tianren Zhang, Haichuan Gao, Feng Chen
Recently, incorporating natural language instructions into reinforcement learning (RL) to learn semantically meaningful representations and foster generalization has attracted considerable attention.
no code implementations • 30 Oct 2021 • Tianren Zhang, Shangqi Guo, Tian Tan, Xiaolin Hu, Feng Chen
Searching in a large goal space poses difficulty for both high-level subgoal generation and low-level policy learning.
no code implementations • 27 Aug 2021 • Tianren Zhang, Yizhou Jiang, Xin Su, Shangqi Guo, Feng Chen
In this paper, we present a novel supervised learning framework for learning from open-ended data, which is modeled as data implicitly sampled from multiple domains, with the data in each domain obeying a domain-specific target function.
1 code implementation • 17 Jun 2021 • Chongkai Gao, Haichuan Gao, Shangqi Guo, Tianren Zhang, Feng Chen
Imitation learning (IL) algorithms have shown promising results for robots to learn skills from expert demonstrations.
1 code implementation • NeurIPS 2020 • Tianren Zhang, Shangqi Guo, Tian Tan, Xiaolin Hu, Feng Chen
In this paper, we show that this problem can be effectively alleviated by restricting the high-level action space from the whole goal space to a $k$-step adjacent region of the current state using an adjacency constraint.
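The idea of restricting the high-level action space to a $k$-step adjacent region can be sketched as follows. This is a hypothetical toy illustration, not the paper's method: it uses a precomputed shortest-path distance table over a small discrete state space, whereas the paper learns adjacency from experience; the function and variable names are assumptions.

```python
import numpy as np

def k_adjacent_subgoals(dist, state, candidate_goals, k):
    """Keep only candidate subgoals within k environment steps of `state`.

    `dist` is a (hypothetical) precomputed state-to-state step-distance
    table; the paper instead learns such an adjacency measure.
    """
    return [g for g in candidate_goals if dist[state, g] <= k]

# Toy 4-state chain environment: moving between states i and j takes
# |i - j| steps, so the distance table is just pairwise absolute differences.
n = 4
dist = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])

# From state 0 with k = 2, the goal space shrinks from all 4 states
# to only the 2-step adjacent region.
print(k_adjacent_subgoals(dist, state=0, candidate_goals=[0, 1, 2, 3], k=2))
```

In this sketch the constraint simply filters the candidate set before the high-level policy chooses among the survivors, so every subgoal it can emit is reachable within $k$ low-level steps.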