no code implementations • 3 Apr 2024 • Zhigen Zhao, Shuo Cheng, Yan Ding, Ziyi Zhou, Shiqi Zhang, Danfei Xu, Ye Zhao
Task and Motion Planning (TAMP) integrates high-level task planning and low-level motion planning to equip robots with the autonomy to effectively reason over long-horizon, dynamic tasks.
no code implementations • 2 Nov 2023 • Shuo Cheng, Caelan Garrett, Ajay Mandlekar, Danfei Xu
Developing intelligent robots for complex manipulation tasks in household and factory settings remains challenging due to long-horizon tasks, contact-rich manipulation, and the need to generalize across a wide variety of object shapes and scene layouts.
no code implementations • 1 Nov 2023 • Shangjie Xue, Shuo Cheng, Pujith Kachana, Danfei Xu
We present a learning-based dynamics model for granular material manipulation.
no code implementations • 22 Oct 2023 • Sachit Kuhar, Shuo Cheng, Shivang Chopra, Matthew Bronars, Danfei Xu
Furthermore, the intrinsic heterogeneity in human behavior can produce equally successful but disparate demonstrations, further exacerbating the challenge of discerning demonstration quality.
no code implementations • 23 Oct 2022 • Shuo Cheng, Danfei Xu
We also show that the learned skills can be reused to accelerate learning in new task domains and transfer to a physical robot platform.
no code implementations • 13 May 2022 • Shuo Cheng, Guoxian Song, Wan-Chun Ma, Chao Wang, Linjie Luo
We present a framework that uses GAN-augmented images to supplement specific, typically underrepresented attributes for machine learning model training.
no code implementations • 22 Oct 2021 • Jiachen Li, Shuo Cheng, Zhenyu Liao, Huayan Wang, William Yang Wang, Qinxun Bai
Improving the sample efficiency of reinforcement learning algorithms requires effective exploration.
1 code implementation • 18 Sep 2021 • Shuo Cheng, Kaichun Mo, Lin Shao
In this paper, we explore whether a robot can learn to regrasp a diverse set of objects to achieve various desired grasp poses.
1 code implementation • CVPR 2020 • Shuo Cheng, Zexiang Xu, Shilin Zhu, Zhuwen Li, Li Erran Li, Ravi Ramamoorthi, Hao Su
In contrast, we propose adaptive thin volumes (ATVs); in an ATV, the depth hypothesis of each plane is spatially varying, which adapts to the uncertainties of previous per-pixel depth predictions.
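The idea of a spatially varying depth hypothesis can be sketched as follows: instead of sampling the same global depth planes at every pixel, each hypothesis plane is centered on the previous per-pixel prediction with a search range scaled by the per-pixel uncertainty. This is a minimal illustrative sketch, not the paper's implementation; the function name, the `scale` parameter, and the use of a standard-deviation map as the uncertainty signal are assumptions.

```python
import numpy as np

def adaptive_thin_volume(depth_pred, depth_std, num_planes=9, scale=1.5):
    """Build per-pixel depth hypotheses (a simplified ATV-style sketch).

    depth_pred: (H, W) previous per-pixel depth prediction
    depth_std:  (H, W) per-pixel uncertainty of that prediction (assumed)
    Returns:    (num_planes, H, W) depth hypotheses, spatially varying
                because the search interval adapts to local uncertainty.
    """
    # Normalized offsets in [-1, 1], one per hypothesis plane
    offsets = np.linspace(-1.0, 1.0, num_planes)            # (D,)
    # Per-pixel search interval proportional to local uncertainty
    interval = scale * depth_std                            # (H, W)
    # Each plane is centered on the prediction; broadcasting gives (D, H, W)
    return depth_pred[None] + offsets[:, None, None] * interval[None]
```

With an odd `num_planes`, the center plane coincides with the previous prediction, and confident regions get a thinner volume than uncertain ones.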
Ranked #13 on 3D Reconstruction on DTU
1 code implementation • CVPR 2020 • Uday Kusupati, Shuo Cheng, Rui Chen, Hao Su
We couple the learning of a multi-view normal estimation module and a multi-view depth estimation module.
2 code implementations • 27 Aug 2019 • An Yan, Shuo Cheng, Wang-Cheng Kang, Mengting Wan, Julian McAuley
Sequential patterns play an important role in building modern recommender systems.
no code implementations • CVPR 2018 • Jingwei Xu, Bingbing Ni, Zefan Li, Shuo Cheng, Xiaokang Yang
Despite the recent emergence of adversarial methods for video prediction, existing algorithms often produce unsatisfactory results in image regions with rich structural information (i.e., object boundaries) and detailed motion (i.e., articulated body movement).
no code implementations • CVPR 2018 • Huanyu Yu, Shuo Cheng, Bingbing Ni, Minsi Wang, Jian Zhang, Xiaokang Yang
First, to facilitate this novel line of research on fine-grained video captioning, we collected a new dataset called the Fine-grained Sports Narrative dataset (FSN), which contains 2K sports videos with ground-truth narratives from YouTube.com.
no code implementations • CVPR 2018 • Jinxian Liu, Bingbing Ni, Yichao Yan, Peng Zhou, Shuo Cheng, Jianguo Hu
On the other hand, in addition to the conventional discriminator of the GAN (i.e., distinguishing between REAL/FAKE samples), we propose a novel guider sub-network that encourages the generated sample (i.e., with a novel pose) toward better satisfying the ReID loss (i.e., cross-entropy ReID loss and triplet ReID loss).
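The generator objective described above can be sketched as a weighted sum of the adversarial term from the REAL/FAKE discriminator and the ReID terms the guider pushes the generated sample to satisfy. This is a hypothetical illustration of how such terms are typically combined; the function name and the `w_*` weights are assumptions, not values from the paper.

```python
def generator_objective(adv_loss, ce_reid_loss, triplet_reid_loss,
                        w_adv=1.0, w_ce=1.0, w_tri=1.0):
    """Combine the adversarial loss (conventional REAL/FAKE discriminator)
    with the guider-driven ReID losses: cross-entropy ReID and triplet ReID.
    All weights are illustrative placeholders, not from the paper."""
    return w_adv * adv_loss + w_ce * ce_reid_loss + w_tri * triplet_reid_loss
```

In practice the relative weights would be tuned so that pose realism (adversarial term) and identity preservation (ReID terms) are balanced during generator updates.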