1 code implementation • 24 Apr 2024 • Qianyu He, Jie Zeng, Qianxi He, Jiaqing Liang, Yanghua Xiao
It is imperative for large language models (LLMs) to follow instructions with elaborate requirements (i.e., complex instruction following).
no code implementations • 4 Apr 2024 • Yanda Li, Dixuan Wang, Jiaqing Liang, Guochao Jiang, Qianyu He, Yanghua Xiao, Deqing Yang
Large Language Models (LLMs) have demonstrated good performance in many reasoning tasks, but they still struggle with some complicated reasoning tasks including logical reasoning.
no code implementations • 25 Mar 2024 • Wenhao Huang, Qianyu He, Zhixu Li, Jiaqing Liang, Yanghua Xiao
Definition bias is a negative phenomenon that can mislead models.
no code implementations • 14 Mar 2024 • Yuncheng Huang, Qianyu He, Yipei Xu, Jiaqing Liang, Yanghua Xiao
In our experiments, we find that atomic skills cannot spontaneously generalize to compositional tasks.
no code implementations • 14 Jan 2024 • Haixia Han, Jiaqing Liang, Jie Shi, Qianyu He, Yanghua Xiao
In this paper, we introduce Intrinsic Self-Correction (ISC) in generative language models, aiming to correct the initial output of LMs in a self-triggered manner, even for small LMs with 6 billion parameters.
no code implementations • 29 Dec 2023 • Yuncheng Huang, Qianyu He, Jiaqing Liang, Sihang Jiang, Yanghua Xiao, Yunwen Chen
Hence, we present a framework to enhance the quantitative reasoning ability of language models based on dimension perception.
2 code implementations • 17 Sep 2023 • Qianyu He, Jie Zeng, Wenhao Huang, Lina Chen, Jin Xiao, Qianxi He, Xunzhe Zhou, Lida Chen, Xintao Wang, Yuncheng Huang, Haoning Ye, Zihan Li, Shisong Chen, Yikai Zhang, Zhouhong Gu, Jiaqing Liang, Yanghua Xiao
To bridge this gap, we propose CELLO, a benchmark for evaluating LLMs' ability to follow complex instructions systematically.
no code implementations • 17 Aug 2023 • Xintao Wang, Qianwen Yang, Yongting Qiu, Jiaqing Liang, Qianyu He, Zhouhong Gu, Yanghua Xiao, Wei Wang
Large language models (LLMs) have demonstrated impressive impact in the field of natural language processing, but they still struggle with several issues, such as completeness, timeliness, faithfulness, and adaptability.
1 code implementation • 13 Jun 2023 • Qianyu He, Yikai Zhang, Jiaqing Liang, Yuncheng Huang, Yanghua Xiao, Yunwen Chen
Similes play an imperative role in creative writing such as story and dialogue generation.
2 code implementations • 9 Jun 2023 • Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Jianchen Wang, Yixin Zhu, Sihang Jiang, Zhuozhi Xiong, Zihan Li, Weijie Wu, Qianyu He, Rui Xu, Wenhao Huang, Jingping Liu, Zili Wang, Shusen Wang, Weiguo Zheng, Hongwei Feng, Yanghua Xiao
New Natural Language Processing (NLP) benchmarks are urgently needed to align with the rapid development of large language models (LLMs).
no code implementations • 23 Apr 2023 • Zhouhong Gu, Xiaoxuan Zhu, Haoning Ye, Lin Zhang, Zhuozhi Xiong, Zihan Li, Qianyu He, Sihang Jiang, Hongwei Feng, Yanghua Xiao
Domain knowledge refers to the in-depth understanding, expertise, and familiarity with a specific subject, industry, field, or area of special interest.
2 code implementations • 18 Feb 2023 • Dakuan Lu, Hengkui Wu, Jiaqing Liang, Yipei Xu, Qianyu He, Yipeng Geng, Mengkun Han, Yingsi Xin, Yanghua Xiao
Our aim is to facilitate research in the development of NLP within the Chinese financial domain.
2 code implementations • 10 Dec 2022 • Qianyu He, Xintao Wang, Jiaqing Liang, Yanghua Xiao
The ability to understand and generate similes is an imperative step to realize human-level AI.
1 code implementation • 25 Jun 2022 • Xintao Wang, Qianyu He, Jiaqing Liang, Yanghua Xiao
In this paper, we propose LMKE, which adopts Language Models to derive Knowledge Embeddings, aiming at both enriching representations of long-tail entities and solving problems of prior description-based methods.
Ranked #3 on Link Prediction on WN18RR
1 code implementation • ACL 2022 • Qianyu He, Sijie Cheng, Zhixu Li, Rui Xie, Yanghua Xiao
In this paper, we investigate the ability of PLMs in simile interpretation by designing a novel task named Simile Property Probing, i.e., letting the PLMs infer the shared properties of similes.