no code implementations • 8 Dec 2023 • Pei Lin, Sihang Xu, Hongdi Yang, Yiran Liu, Xin Chen, Jingya Wang, Jingyi Yu, Lan Xu
We further present HandDiffuse, a strong baseline for controllable motion generation of interacting hands driven by various controllers.
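As background, controller-conditioned motion generation with a diffusion model typically denoises a motion tensor while feeding the control signal into the denoiser at every step. The sketch below is a generic conditional DDPM sampling loop, not HandDiffuse's actual architecture; the motion dimensionality, controller dimensionality, network, and noise schedule are all assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical denoiser: predicts noise from the noisy motion, timestep, and control signal.
class Denoiser(nn.Module):
    def __init__(self, motion_dim=2 * 21 * 3, ctrl_dim=12, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(motion_dim + ctrl_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, motion_dim),
        )

    def forward(self, x_t, t, ctrl):
        t_feat = t.float().unsqueeze(-1) / 1000.0          # crude timestep encoding
        return self.net(torch.cat([x_t, ctrl, t_feat], dim=-1))

@torch.no_grad()
def sample(denoiser, ctrl, steps=1000, motion_dim=2 * 21 * 3):
    """Generic DDPM-style ancestral sampling conditioned on a controller signal."""
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    abar = torch.cumprod(alphas, dim=0)
    x = torch.randn(ctrl.shape[0], motion_dim)              # start from Gaussian noise
    for t in reversed(range(steps)):
        eps = denoiser(x, torch.full((ctrl.shape[0],), t), ctrl)
        mean = (x - betas[t] / (1 - abar[t]).sqrt() * eps) / alphas[t].sqrt()
        x = (mean + betas[t].sqrt() * torch.randn_like(x)) if t > 0 else mean
    return x
```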
no code implementations • 4 Dec 2023 • Yongzhuo Chen, Yixuan Liang, Yiran Liu, Brian Hobbs, Michael Kane
This paper addresses the challenge of accurately valuing post-revenue drug assets in the biotechnology and pharmaceutical sectors, a key factor in strategic operations and investment decisions.
1 code implementation • 20 Oct 2023 • Haoran Li, Yiran Liu, Xingxing Zhang, Wei Lu, Furu Wei
Furthermore, we apply probabilistic ranking and contextual ranking sequentially to the instruction-tuned LLM.
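A minimal sketch of what the probabilistic-ranking stage might look like, assuming candidates are re-scored by length-normalized log-likelihood under a Hugging Face causal LM; the prompt format and scoring objective are assumptions, and the subsequent contextual-ranking stage (comparing candidates conditioned on the source) is omitted.

```python
import torch
import torch.nn.functional as F

def probabilistic_rank(model, tokenizer, source, candidates, device="cpu"):
    """Rank candidate summaries by length-normalized log-likelihood under a causal LM
    (a generic illustration of probabilistic ranking, not the paper's exact objective)."""
    model.eval()
    scores = []
    for cand in candidates:
        text = f"{source}\nSummary: {cand}"                  # hypothetical prompt format
        ids = tokenizer(text, return_tensors="pt").input_ids.to(device)
        with torch.no_grad():
            logits = model(ids).logits                       # (1, L, vocab)
        # log-probability of each observed next token
        logp = F.log_softmax(logits[:, :-1], dim=-1)
        tok_logp = logp.gather(-1, ids[:, 1:].unsqueeze(-1)).squeeze(-1)
        scores.append(tok_logp.mean().item())                # length-normalized score
    return sorted(zip(candidates, scores), key=lambda t: -t[1])
```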
2 code implementations • ICCV 2023 • Yiran Liu, Xin Feng, Yunlong Wang, Wu Yang, Di Ming
By crafting a single universal adversarial perturbation (UAP) that fools a CNN across diverse data samples, universal attacks enable a more efficient and accurate evaluation of CNN robustness.
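For context, a generic universal attack optimizes one shared perturbation over many inputs. The sketch below is a plain PyTorch illustration of that idea, not this paper's specific algorithm: it ascends the classification loss and projects the perturbation onto an L∞ ball. The model (assumed to return logits), data loader, input resolution, and budget `eps` are assumptions.

```python
import torch
import torch.nn.functional as F

def craft_uap(model, loader, eps=10/255, step=1/255, epochs=5, device="cpu"):
    """Craft a single universal perturbation shared by all samples (generic sketch)."""
    model.eval()
    delta = torch.zeros(1, 3, 224, 224, device=device, requires_grad=True)
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            loss = F.cross_entropy(model(x + delta), y)      # push predictions away from labels
            loss.backward()
            with torch.no_grad():
                delta += step * delta.grad.sign()            # gradient-ascent step on the UAP
                delta.clamp_(-eps, eps)                      # project onto the L_inf ball
            delta.grad.zero_()
    return delta.detach()
```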
no code implementations • 8 Dec 2022 • Xingxing Zhang, Yiran Liu, Xun Wang, Pengcheng He, Yang Yu, Si-Qing Chen, Wayne Xiong, Furu Wei
The input and output of most text generation tasks can be represented as two token sequences, which can then be modeled with sequence-to-sequence learning tools such as Transformers.
Ranked #2 on Text Summarization on SAMSum
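As a generic illustration of this sequence-to-sequence framing (not the paper's own model), any text-to-text task can be run through an off-the-shelf encoder-decoder checkpoint; the `t5-small` checkpoint and the summarization prefix below are assumptions.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")        # any seq2seq checkpoint works here
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

long_document = "A dialogue or article to be condensed into a short summary."
inputs = tokenizer("summarize: " + long_document, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=64)     # decode output token sequence
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```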
no code implementations • 14 Nov 2022 • Yiran Liu, Xiao Liu, Haotian Chen, Yang Yu
We use our theoretical framework to explain why current debiasing methods cause performance degradation.
no code implementations • 6 Nov 2022 • Haotian Chen, Lingwei Zhang, Yiran Liu, Fanchao Chen, Yang Yu
To validate our theoretical analysis, we further propose a Causality-Aware Self-Attention Mechanism (CASAM) that guides the model to learn the underlying causal knowledge in legal texts.
no code implementations • 10 Jan 2022 • Tao Chen, Yiran Liu, Haoyu Jiang, Ruirui Li
While CNNs excel at extracting local detail features, Transformers naturally capture global contextual information.
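A toy hybrid block illustrating this division of labor: convolutions extract local features, which are then flattened into tokens for a Transformer encoder that mixes global context. All layer sizes are assumptions, and this is not the paper's network.

```python
import torch
import torch.nn as nn

class ConvTransformerBlock(nn.Module):
    """Toy hybrid: conv layers capture local detail, a Transformer encoder
    mixes global context across spatial positions (illustrative only)."""
    def __init__(self, in_ch=3, dim=64, heads=4):
        super().__init__()
        self.local = nn.Sequential(                          # local feature extraction (CNN)
            nn.Conv2d(in_ch, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.global_mix = nn.TransformerEncoder(layer, num_layers=2)   # global context mixing

    def forward(self, x):
        f = self.local(x)                                    # (B, dim, H, W)
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)                # (B, H*W, dim) token sequence
        tokens = self.global_mix(tokens)
        return tokens.transpose(1, 2).view(b, c, h, w)

x = torch.randn(2, 3, 64, 64)
print(ConvTransformerBlock()(x).shape)                       # torch.Size([2, 64, 16, 16])
```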