no code implementations • NAACL (ACL) 2022 • Tao Zhu, Zhe Zhao, Weijie Liu, Jiachi Liu, Yiren Chen, Weiquan Mao, Haoyan Liu, Kunbo Ding, Yudong Li, Xuefeng Yang
Catastrophic forgetting is a challenge for model deployment in industrial real-time systems, which require a model to quickly master a new task without forgetting the old ones.
no code implementations • 19 May 2023 • Xingyu Bai, Taiqiang Wu, Han Guo, Zhe Zhao, Xuefeng Yang, Jiayi Li, Weijie Liu, Qi Ju, Weigang Guo, Yujiu Yang
Event Extraction (EE), aiming to identify and classify event triggers and arguments from event mentions, has benefited from pre-trained language models (PLMs).
no code implementations • 28 Oct 2022 • Xuefeng Yang, Li Liu, Wenju Zhou, Jing Shi, Yinggang Zhang, Xin Hu, Huiyu Zhou
Moreover, the privacy of the system is analyzed to ensure the security of the real data.
1 code implementation • Findings (NAACL) 2022 • Kunbo Ding, Weijie Liu, Yuejian Fang, Zhe Zhao, Qi Ju, Xuefeng Yang
Previous studies have proved that cross-lingual knowledge distillation can significantly improve the performance of pre-trained models for cross-lingual similarity matching tasks.
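As a generic illustration (not this paper's actual method), cross-lingual knowledge distillation for similarity matching is often implemented by training a multilingual student to reproduce a frozen teacher's sentence embeddings on parallel text. A minimal sketch with hypothetical toy mean-pooling "encoders", trained with a mean-squared-error distillation objective:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "encoders": random embedding tables standing in for teacher/student
# models. All names and sizes here are illustrative assumptions.
EMB_DIM, VOCAB = 16, 100

def embed(token_ids, weights):
    """Mean-pool token embeddings into one sentence vector."""
    return weights[token_ids].mean(axis=0)

teacher_W = rng.normal(size=(VOCAB, EMB_DIM))   # frozen monolingual teacher
student_W = rng.normal(size=(VOCAB, EMB_DIM))   # trainable multilingual student

# One parallel pair: source sentence (teacher side) and its translation
# (student side), represented by arbitrary token ids.
src_ids, tgt_ids = np.array([1, 5, 7]), np.array([2, 9])

lr = 1.0
for step in range(200):
    t = embed(src_ids, teacher_W)        # distillation target (no gradient)
    s = embed(tgt_ids, student_W)        # student prediction
    grad = 2 * (s - t) / EMB_DIM         # d(MSE)/d(s)
    # Mean pooling spreads the gradient evenly over the target tokens.
    student_W[tgt_ids] -= lr * grad / len(tgt_ids)

final_mse = float(np.mean((embed(tgt_ids, student_W)
                           - embed(src_ids, teacher_W)) ** 2))
print(final_mse)  # shrinks toward 0 as the student matches the teacher
```

With enough steps the student embedding of the translation converges onto the teacher embedding of the source sentence, which is the core idea behind distillation-based cross-lingual similarity matching.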
1 code implementation • 14 Feb 2022 • Weijie Liu, Tao Zhu, Weiquan Mao, Zhe Zhao, Weigang Guo, Xuefeng Yang, Qi Ju
In this paper, we pay attention to an issue that is usually overlooked, i.e., similarity should be determined from different perspectives.
2 code implementations • 10 Jun 2020 • Ningyuan Sun, Xuefeng Yang, Yunfeng Liu
Existing NL2SQL datasets assume that condition values should appear exactly in natural language questions and the queries are answerable given the table.
no code implementations • IJCNLP 2019 • Pengfei Li, Kezhi Mao, Xuefeng Yang, Qi Li
While attention mechanisms have proven effective in many NLP tasks, the majority of them are data-driven.
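For context, the "data-driven" attention referred to here computes its weights purely from the inputs, as in standard scaled dot-product attention. A minimal numpy sketch of that baseline (a generic illustration, not the knowledge-guided variant this paper proposes):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Weights are derived only from the data (Q, K) -- the 'data-driven'
    behaviour contrasted with knowledge-guided attention."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # similarity of each query to each key
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))              # 2 queries
K = rng.normal(size=(3, 4))              # 3 keys
V = rng.normal(size=(3, 4))              # 3 values
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)                         # (2, 4)
```

A knowledge-driven variant would additionally bias `scores` with external information rather than relying on the learned representations alone.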
no code implementations • 24 Sep 2019 • Ying Ju, Fubang Zhao, Shijie Chen, Bowen Zheng, Xuefeng Yang, Yunfeng Liu
Conversational Question Answering is a challenging task since it requires understanding of conversational history.
no code implementations • 29 May 2015 • Xuefeng Yang, Kezhi Mao
Inspired by deep learning, the authors propose a supervised framework for learning vector representations of words, providing additional supervised fine-tuning after unsupervised learning.
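One simple way such supervised fine-tuning can work (a hypothetical sketch, not necessarily the authors' framework) is to adjust pretrained word vectors so that words sharing a class label move closer together:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: word vectors from unsupervised pretraining are
# fine-tuned with label supervision from a downstream sentiment task.
words = ["good", "great", "bad", "awful"]
labels = np.array([0, 0, 1, 1])              # sentiment classes
E = rng.normal(size=(len(words), 8))         # "pretrained" embeddings

def class_centroids(E, labels):
    return np.stack([E[labels == c].mean(axis=0) for c in np.unique(labels)])

lr = 0.2
for _ in range(100):
    C = class_centroids(E, labels)
    # Supervised fine-tuning step: pull each word toward its class centroid,
    # shrinking within-class distances while centroids stay fixed.
    E -= lr * (E - C[labels])

same = np.linalg.norm(E[0] - E[1])   # "good" vs "great" (same class)
diff = np.linalg.norm(E[0] - E[2])   # "good" vs "bad"  (different class)
print(same < diff)                   # True: same-class words end up closer
```

After fine-tuning, same-class words are much closer to each other than to words of the opposite class, which is the kind of task-aware structure that purely unsupervised embeddings lack.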