no code implementations • 1 Feb 2024 • Yue Xing, Xiaofeng Lin, Namjoon Suh, Qifan Song, Guang Cheng
In practice, it is observed that transformer-based models can learn concepts in context at inference time.
no code implementations • 26 Jan 2024 • Yue Xing, Xiaofeng Lin, Qifan Song, Yi Xu, Belinda Zeng, Guang Cheng
Pre-training is known to produce universal representations for downstream tasks in large-scale deep learning models such as large language models.
1 code implementation • 24 Oct 2023 • Namjoon Suh, Xiaofeng Lin, Din-Yin Hsieh, Merhdad Honarkhah, Guang Cheng
Diffusion models have become a dominant paradigm for synthetic data generation in many subfields of modern machine learning, including computer vision, language modeling, and speech synthesis.
no code implementations • 28 Sep 2023 • Zirui Xu, Xiaofeng Lin, Vasileios Tzoumas
MetaBSG leverages a meta-algorithm to learn whether the robots should follow the commands or a recently developed submodular coordination algorithm, Bandit Sequential Greedy (BSG) [1], which has performance guarantees even in unpredictable and partially-observable environments.
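The meta-algorithm idea above can be illustrated with a generic online expert-selection scheme. The sketch below is a hedged, synthetic example, not the paper's MetaBSG implementation: it uses a multiplicative-weights update under bandit feedback to learn which of two policies (stand-ins for "follow the commands" vs. "fall back to BSG") yields higher reward; the reward values and learning rate are assumptions for illustration.

```python
import numpy as np

# Multiplicative-weights meta-selection over two policies (toy example).
# Policy 0 stands in for "follow the external commands", policy 1 for
# "follow the fallback coordination algorithm"; mean rewards are assumed.
rng = np.random.default_rng(1)

eta = 0.2                        # learning rate (assumed)
w = np.ones(2)                   # weights over the two policies
mean_reward = np.array([0.3, 0.8])  # synthetic mean rewards; policy 1 is better

picks = []
for _ in range(500):
    p = w / w.sum()
    i = rng.choice(2, p=p)                  # sample one policy this round
    r = rng.normal(mean_reward[i], 0.1)     # observe only its (noisy) reward
    w[i] *= np.exp(eta * r)                 # multiplicative-weights update
    picks.append(i)

# The meta-learner should concentrate on the better policy over time.
late_share = float(np.mean(picks[-100:]))
print(late_share)
```

Over the last 100 rounds the sampler should mostly pick the higher-reward policy; the real MetaBSG additionally carries the performance guarantees of BSG, which this toy update does not.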
1 code implementation • 25 Sep 2023 • Xiaofeng Lin, Guoxi Zhang, Xiaotian Lu, Han Bao, Koh Takeuchi, Hisashi Kashima
One popular application of this estimation lies in the prediction of the impact of a treatment (e.g., a promotion) on an outcome (e.g., sales) of a particular unit (e.g., an item), known as the individual treatment effect (ITE).
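ITE estimation as described above can be sketched with a standard T-learner baseline: fit one outcome model on treated units and one on control units, then take the difference of their predictions per unit. This is an illustrative baseline under synthetic data, not the paper's method; the data-generating process and the linear models are assumptions.

```python
import numpy as np

# T-learner sketch for individual treatment effect (ITE) estimation.
rng = np.random.default_rng(0)

n = 2000
x = rng.normal(size=(n, 1))        # unit covariates (e.g., item features)
t = rng.integers(0, 2, size=n)     # treatment indicator (e.g., promotion)
# Synthetic outcome: baseline 2*x plus a constant treatment effect of 1.5.
y = 2.0 * x[:, 0] + 1.5 * t + rng.normal(scale=0.1, size=n)

def fit_linear(X, Y):
    """Least-squares fit of Y on [X, 1] (slope and intercept)."""
    A = np.column_stack([X, np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(A, Y, rcond=None)
    return coef

def predict(coef, X):
    return np.column_stack([X, np.ones(len(X))]) @ coef

c1 = fit_linear(x[t == 1], y[t == 1])   # outcome model for treated units
c0 = fit_linear(x[t == 0], y[t == 0])   # outcome model for control units

# Estimated ITE per unit: predicted treated outcome minus predicted control.
ite_hat = predict(c1, x) - predict(c0, x)
print(float(ite_hat.mean()))  # should be close to the true effect 1.5
```

With a constant true effect, the average of the per-unit estimates recovers it; heterogeneous effects would show up as variation in `ite_hat` across units.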
1 code implementation • 22 May 2023 • Zirui Xu, Xiaofeng Lin, Vasileios Tzoumas
We are motivated by the future of autonomy that involves multiple robots coordinating actions in dynamic, unstructured, and partially observable environments to complete complex tasks such as target tracking, environmental mapping, and area monitoring.
1 code implementation • 22 Jul 2022 • Xiaofeng Lin, Seungbae Kim, Jungseock Joo
Existing pruning techniques preserve deep neural networks' overall ability to make correct predictions but may also amplify hidden biases during the compression process.
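The kind of compression referred to above can be illustrated with plain magnitude pruning: zero out the smallest-magnitude weights of a layer. This is a generic sketch of the technique being compressed, not the paper's fairness-aware pruning method; the layer shape and sparsity level are assumptions.

```python
import numpy as np

# Magnitude pruning sketch: zero the fraction `sparsity` of weights
# with the smallest absolute value in a toy weight matrix.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))  # a toy layer's weight matrix

def magnitude_prune(W, sparsity):
    """Return a copy of W with the `sparsity` fraction of
    smallest-magnitude entries set to zero."""
    k = int(W.size * sparsity)
    if k == 0:
        return W.copy()
    thresh = np.sort(np.abs(W).ravel())[k - 1]  # k-th smallest magnitude
    pruned = W.copy()
    pruned[np.abs(pruned) <= thresh] = 0.0
    return pruned

P = magnitude_prune(W, 0.5)
print(float(np.mean(P == 0.0)))  # fraction of zeroed weights, about 0.5
```

Pruning criteria like this are accuracy-oriented: they keep the weights that matter most for average loss, which is precisely why subgroup behavior (and hence hidden bias) can shift even when overall accuracy is preserved.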