no code implementations • 18 Apr 2024 • Zi Xiong, Lizhi Qing, Yangyang Kang, Jiawei Liu, Hongsong Li, Changlong Sun, Xiaozhong Liu, Wei Lu
The widespread use of pre-trained language models (PLMs) in natural language processing (NLP) has substantially improved performance.
no code implementations • 10 Apr 2024 • Yongqiang Ma, Lizhi Qing, Jiawei Liu, Yangyang Kang, Yue Zhang, Wei Lu, Xiaozhong Liu, Qikai Cheng
Therefore, our study shifts the focus from model-centered to human-centered evaluation in the context of AI-powered writing assistance applications.
no code implementations • 4 Apr 2024 • Kai Zhang, Lizhi Qing, Yangyang Kang, Xiaozhong Liu
Large Language Models (LLMs) have exhibited remarkable proficiency in comprehending and generating natural language.
1 code implementation • 21 Sep 2023 • Chengyuan Liu, Fubang Zhao, Lizhi Qing, Yangyang Kang, Changlong Sun, Kun Kuang, Fei Wu
There are several black-box attack methods, such as Prompt Attack, which can alter the behaviour of LLMs and induce them to generate unexpected answers with harmful content.
1 code implementation • 18 Nov 2019 • Tao Gui, Lizhi Qing, Qi Zhang, Jiacheng Ye, Hang Yan, Zichu Fei, Xuanjing Huang
To effectively reduce the impact of non-ideal auxiliary tasks on the main task, we further proposed a novel meta-learning-based multi-task learning approach: the shared hidden layers were trained on the auxiliary tasks, while the meta-optimization objective was to minimize the loss on the main task, ensuring that the optimization direction led to an improvement on the main task.
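The scheme described above can be illustrated with a toy sketch: take a gradient step on an auxiliary loss, then update the shared parameter so that the main-task loss *after* that step decreases. This is a minimal, self-contained illustration with made-up quadratic losses and hand-derived gradients, not the paper's implementation; all names, learning rates, and loss functions here are assumptions for the sake of the example.

```python
# Toy sketch of meta-learning-based multi-task learning (illustrative only).
# A shared parameter w is trained on an auxiliary task, but the
# meta-objective is the MAIN-task loss measured after the auxiliary update,
# so the meta-gradient steers w toward auxiliary steps that help the main task.

ALPHA = 0.05   # inner (auxiliary-task) learning rate
BETA = 0.1     # outer (meta) learning rate

# Illustrative quadratic losses: L(w) = (scale * w - target)^2
A, B = 1.0, 2.0    # auxiliary task: minimum at w = 2
C, D = 1.0, 3.0    # main task: minimum at w = 3

def aux_grad(w):
    # Gradient of the auxiliary loss (A*w - B)^2 with respect to w.
    return 2 * A * (A * w - B)

def main_loss(w):
    return (C * w - D) ** 2

def meta_step(w):
    # Inner step: train the shared parameter on the auxiliary task.
    w_inner = w - ALPHA * aux_grad(w)
    # Meta-gradient of the post-update main loss w.r.t. the ORIGINAL w,
    # obtained via the chain rule through the inner update (d w_inner / d w).
    d_inner_dw = 1 - 2 * ALPHA * A * A
    meta_grad = 2 * C * (C * w_inner - D) * d_inner_dw
    # Outer step: move w so the auxiliary update improves the main task.
    return w - BETA * meta_grad

if __name__ == "__main__":
    w = 0.0
    initial = main_loss(w)
    for _ in range(200):
        w = meta_step(w)
    print(main_loss(w) < initial)  # main-task loss improves
```

In a real model the inner update would touch the shared hidden layers and the meta-gradient would be computed by automatic differentiation through that update (e.g. second-order gradients), but the control flow is the same: inner step on the auxiliary loss, outer step on the main-task loss.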