1 code implementation • 11 Mar 2024 • Li Yuan, Yi Cai, Haopeng Ren, Jiexin Wang
LMPM incorporates an external memory structure to learn and store the latent representations of logical patterns, which aids in generating logically consistent conclusions.
1 code implementation • 29 Feb 2024 • Yiju Guo, Ganqu Cui, Lifan Yuan, Ning Ding, Jiexin Wang, Huimin Chen, Bowen Sun, Ruobing Xie, Jie Zhou, Yankai Lin, Zhiyuan Liu, Maosong Sun
In practice, the multifaceted nature of human preferences inadvertently introduces what is known as the "alignment tax": a compromise where enhancements in alignment on one objective (e.g., harmlessness) can diminish performance on others (e.g., helpfulness).
no code implementations • 14 Feb 2024 • Jiexin Wang, Jiahao Chen, Bing Su
Although deep neural networks achieve high classification accuracy given sufficient training data, their predictions are typically overconfident or under-confident, i.e., the prediction confidences do not faithfully reflect the actual accuracy.
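The miscalibration described above is commonly quantified with the expected calibration error (ECE): predictions are binned by confidence, and the gap between mean confidence and empirical accuracy is averaged across bins. A minimal sketch (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by confidence and average the weighted gap
    between mean confidence and empirical accuracy per bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap  # weight by fraction of samples in bin
    return ece

# Toy case: 80% confidence on every prediction, 8 of 10 correct,
# so confidence matches accuracy and the ECE is (near) zero.
conf = [0.8] * 10
corr = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]
print(round(expected_calibration_error(conf, corr), 4))  # → 0.0
```

An overconfident model (say, 0.99 confidence at 80% accuracy) would instead yield an ECE near 0.19.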
no code implementations • 25 Oct 2023 • Jiexin Wang, Liuwen Cao, Xitong Luo, Zhiping Zhou, Jiayuan Xie, Adam Jatowt, Yi Cai
Moreover, our study identifies weaknesses in existing models' ability to repair vulnerable code, even when provided with vulnerability information.
1 code implementation • 2 Aug 2023 • Jiexin Wang, Yujie Zhou, Wenwen Qiang, Ying Ba, Bing Su, Ji-Rong Wen
Human motion prediction (HMP) has emerged as a popular research topic owing to its diverse applications, but it remains challenging due to the stochastic and aperiodic nature of future poses.
no code implementations • 17 Apr 2023 • Jiexin Wang, Jiahao Chen, Bing Su
Auto-evaluation aims to automatically evaluate a trained model on any test dataset without human annotations.
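A simple label-free baseline in this spirit is to estimate test accuracy from the model's own confidence, e.g., the average maximum softmax probability over the unlabeled test set. The sketch below illustrates that idea only; it is an assumed baseline, not the method proposed in the paper:

```python
import numpy as np

def softmax(logits):
    """Numerically stable row-wise softmax."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def estimate_accuracy(logits):
    """Label-free accuracy proxy: mean of the maximum softmax
    probability across all (unlabeled) test examples."""
    probs = softmax(np.asarray(logits, dtype=float))
    return probs.max(axis=1).mean()

# Two test examples: one fairly confident, one near-uniform.
logits = np.array([[2.0, 0.0, 0.0],
                   [0.5, 0.4, 0.3]])
print(estimate_accuracy(logits))
```

On perfectly uniform logits over k classes this proxy returns 1/k, matching the accuracy of random guessing; it is only reliable when the model is well calibrated, which ties this line of work to the calibration problem above.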
no code implementations • 27 Apr 2022 • Jiexin Wang, Adam Jatowt, Masatoshi Yoshikawa, Yi Cai
Time is an important aspect of documents and is used in a range of NLP and IR tasks.
no code implementations • 8 Sep 2021 • Jiexin Wang, Adam Jatowt, Masatoshi Yoshikawa
In the last few years, open-domain question answering (ODQA) has advanced rapidly due to the development of deep learning techniques and the availability of large-scale QA datasets.