no code implementations • Findings (NAACL) 2022 • Lu Sun, Yongliang Shen, Weiming Lu
In this paper, we propose a novel method to induce relations with BERT under a minimally-supervised setting.
no code implementations • ACL 2022 • Shuai Zhang, Yongliang Shen, Zeqi Tan, Yiquan Wu, Weiming Lu
Named entity recognition (NER) is a fundamental task to recognize specific types of entities from a given sentence.
no code implementations • 22 Apr 2024 • Xiaoxia Cheng, Zeqi Tan, Weiming Lu
In this paper, we propose an information re-organization (InfoRE) method before proceeding with the reasoning to enhance the reasoning ability of LLMs.
no code implementations • 14 Mar 2024 • Chang Zong, Yuyan Chen, Weiming Lu, Jian Shao, Yueting Zhuang
Large Language Models (LLMs) have demonstrated efficacy in various linguistic applications, including text summarization and controlled text generation.
1 code implementation • 27 Feb 2024 • Wenqi Zhang, Ke Tang, Hai Wu, Mengna Wang, Yongliang Shen, Guiyang Hou, Zeqi Tan, Peng Li, Yueting Zhuang, Weiming Lu
Large Language Models exhibit robust problem-solving capabilities for diverse tasks.
no code implementations • 22 Feb 2024 • Chang Zong, Yuchen Yan, Weiming Lu, Jian Shao, Eliot Huang, Heng Chang, Yueting Zhuang
We evaluated the performance of our framework using three benchmark datasets, and the results show that our framework outperforms state-of-the-art systems on the LC-QuAD and YAGO-QA benchmarks, yielding F1 scores of 11.8% and 20.7%, respectively.
no code implementations • 4 Jan 2024 • Wenqi Zhang, Yongliang Shen, Linjuan Wu, Qiuying Peng, Jun Wang, Yueting Zhuang, Weiming Lu
Experiments conducted on a series of reasoning and translation tasks with different LLMs serve to underscore the effectiveness and generality of our strategy.
1 code implementation • 30 Nov 2023 • Yongliang Shen, Kaitao Song, Xu Tan, Wenqi Zhang, Kan Ren, Siyu Yuan, Weiming Lu, Dongsheng Li, Yueting Zhuang
To this end, we introduce TaskBench to evaluate the capability of LLMs in task automation.
no code implementations • 14 Oct 2023 • Wenqi Zhang, Yongliang Shen, Qingpeng Nong, Zeqi Tan, Yanna Ma, Weiming Lu
To generate a tree with expressions as its nodes, we employ a layer-wise parallel decoding strategy: at each layer we decode multiple independent expressions (leaf nodes) in parallel, and we repeat this parallel decoding layer by layer to sequentially generate the parent-node expressions that depend on them.
Ranked #2 on Math Word Problem Solving on MathQA
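The layer-wise parallel decoding described above can be illustrated with a toy bottom-up evaluator; this is a minimal sketch of the decoding *order* only (the actual system uses a neural decoder), and the list-of-layers encoding is an assumption made for illustration.

```python
# Toy illustration of layer-wise parallel decoding of an expression tree.
# Each layer holds nodes that are mutually independent, so a real model
# could decode all of them in parallel before moving to the next layer.

def layerwise_decode(layers):
    """layers: list of layers; a node is either a literal quantity (leaf)
    or a tuple (op, left_idx, right_idx) referencing earlier results."""
    produced = []                     # all results so far, addressable by index
    for layer in layers:
        new = []
        for node in layer:
            if isinstance(node, tuple):            # internal node
                op, i, j = node
                l, r = produced[i], produced[j]
                val = {"+": l + r, "-": l - r, "*": l * r, "/": l / r}[op]
            else:                                   # leaf: a quantity
                val = node
            new.append(val)
        produced.extend(new)          # later layers may reference these
    return produced[-1]               # value of the root expression

# (3 + 5) * 2, decoded in three layers: leaves, then "+", then the root "*".
print(layerwise_decode([[3, 5, 2], [("+", 0, 1)], [("*", 3, 2)]]))  # 16
```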
no code implementations • 13 Oct 2023 • Yiquan Wu, Siying Zhou, Yifei Liu, Weiming Lu, Xiaozhong Liu, Yating Zhang, Changlong Sun, Fei Wu, Kun Kuang
Precedents are previous legal cases with similar facts, which serve as the basis for judging subsequent cases in national legal systems.
1 code implementation • 12 Oct 2023 • Shuhui Wu, Yongliang Shen, Zeqi Tan, Wenqi Ren, Jietian Guo, ShiLiang Pu, Weiming Lu
Distantly supervised named entity recognition (DS-NER) aims to locate entity mentions and classify their types using only knowledge bases or gazetteers and an unlabeled corpus.
no code implementations • 18 Aug 2023 • Shuhui Wu, Zengming Tang, Zongyi Guo, Weiwei Zhang, Baoliang Cui, Haihong Tang, Weiming Lu
Simultaneously, we utilize open-domain datasets during training to improve the performance of PUMGPT and its generalization ability.
1 code implementation • 12 Jun 2023 • Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
Various industries such as finance, meteorology, and energy produce vast amounts of heterogeneous data every day.
1 code implementation • 26 May 2023 • Yongliang Shen, Zeqi Tan, Shuhui Wu, Wenqi Zhang, Rongsheng Zhang, Yadong Xi, Weiming Lu, Yueting Zhuang
Prompt learning is a new paradigm for utilizing pre-trained language models and has achieved great success in many tasks.
Ranked #1 on Nested Named Entity Recognition on ACE 2004
2 code implementations • 22 May 2023 • Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, Yueting Zhuang
In this paper, we propose DiffusionNER, which formulates the named entity recognition task as a boundary-denoising diffusion process and thus generates named entities from noisy spans.
Ranked #2 on Nested Named Entity Recognition on GENIA
1 code implementation • 18 May 2023 • Wei Xue, Yongliang Shen, Wenqi Ren, Jietian Guo, ShiLiang Pu, Weiming Lu
Specifically, TaxBox consists of three components: (1) a graph aggregation module that leverages the structural information of the taxonomy, together with two lightweight decoders that map features to box embeddings and capture complex relationships between concepts; (2) two probabilistic scorers corresponding to the attachment and insertion operations, which avoid pseudo-leaves; and (3) three learning objectives that help the model map concepts onto the box embedding space at a finer granularity.
1 code implementation • 5 May 2023 • Zeqi Tan, Shen Huang, Zixia Jia, Jiong Cai, Yinghui Li, Weiming Lu, Yueting Zhuang, Kewei Tu, Pengjun Xie, Fei Huang, Yong Jiang
Also, we discover that the limited context length causes the retrieval knowledge to be invisible to the model.
Tasks: Multilingual Named Entity Recognition, Named Entity Recognition (+4 more)
1 code implementation • NeurIPS 2023 • Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, Yueting Zhuang
Solving complicated AI tasks with different domains and modalities is a key step toward artificial general intelligence.
1 code implementation • 26 Dec 2022 • Yechun Tang, Xiaoxia Cheng, Weiming Lu
Complex knowledge base question answering can be achieved by converting questions into sequences of predefined actions.
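Converting a question into a sequence of predefined actions can be pictured with a tiny action interpreter over a toy knowledge base; the action names and KB below are illustrative assumptions, not the paper's actual action set.

```python
# Minimal sketch: answer a question by executing a sequence of predefined
# actions against a toy knowledge base (subject -> relation -> objects).

KB = {
    "Einstein": {"born_in": {"Ulm"}, "field": {"physics"}},
    "Ulm": {"located_in": {"Germany"}},
}

def execute(actions):
    """Run actions of the form ("start", entity) or ("hop", relation)."""
    entities = set()
    for act, arg in actions:
        if act == "start":               # seed with the topic entity
            entities = {arg}
        elif act == "hop":               # follow a relation edge in the KB
            entities = {o for e in entities
                          for o in KB.get(e, {}).get(arg, set())}
    return entities

# "Which country was Einstein born in?" as an action sequence:
program = [("start", "Einstein"), ("hop", "born_in"), ("hop", "located_in")]
print(execute(program))  # {'Germany'}
```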
no code implementations • 3 Nov 2022 • Zeqi Tan, Yongliang Shen, Xuming Hu, Wenqi Zhang, Xiaoxia Cheng, Weiming Lu, Yueting Zhuang
Joint entity and relation extraction has been a core task in the field of information extraction.
Tasks: Contrastive Learning, Joint Entity and Relation Extraction (+1 more)
1 code implementation • 21 Oct 2022 • Wenqi Zhang, Yongliang Shen, Yanna Ma, Xiaoxia Cheng, Zeqi Tan, Qingpeng Nong, Weiming Lu
Solving math word problems requires both precise reasoning about the relations among quantities in the text and reliable generation of diverse equations.
Ranked #1 on Math Word Problem Solving on Math23K (using extra training data)
no code implementations • 2 Oct 2022 • Chang Zong, Yueting Zhuang, Weiming Lu, Jian Shao, Siliang Tang
In this paper, we propose CTPIR, a new citation trajectory prediction framework that is able to represent the influence (the momentum of citation) of either new or existing publications using the history information of all their attributes.
1 code implementation • 24 Aug 2022 • Xinyu Zhu, Yongliang Shen, Weiming Lu
Concomitant administration of drugs can cause drug-drug interactions (DDIs).
no code implementations • 16 May 2022 • Xinyin Ma, Xinchao Wang, Gongfan Fang, Yongliang Shen, Weiming Lu
Data-free knowledge distillation (DFKD) performs knowledge distillation without relying on the original training data, and has recently achieved impressive results in accelerating pre-trained language models.
1 code implementation • 27 Apr 2022 • Shuhui Wu, Yongliang Shen, Zeqi Tan, Weiming Lu
In the refine stage, proposals interact with each other, and richer contextual information is incorporated into the proposal representations.
1 code implementation • ACL 2022 • Yongliang Shen, Xiaobin Wang, Zeqi Tan, Guangwei Xu, Pengjun Xie, Fei Huang, Weiming Lu, Yueting Zhuang
Each instance query predicts one entity, and by feeding all instance queries simultaneously, we can query all entities in parallel.
Ranked #1 on Nested Named Entity Recognition on GENIA
Tasks: Chinese Named Entity Recognition, Named Entity Recognition (+5 more)
1 code implementation • SemEval (NAACL) 2022 • Xinyu Wang, Yongliang Shen, Jiong Cai, Tao Wang, Xiaobin Wang, Pengjun Xie, Fei Huang, Weiming Lu, Yueting Zhuang, Kewei Tu, Wei Lu, Yong Jiang
Our system wins 10 out of 13 tracks in the MultiCoNER shared task.
Tasks: Multilingual Named Entity Recognition, Named Entity Recognition (+1 more)
1 code implementation • EMNLP 2021 • Xinyin Ma, Yong Jiang, Nguyen Bach, Tao Wang, Zhongqiang Huang, Fei Huang, Weiming Lu
Entity retrieval, which aims at disambiguating mentions to canonical entities from massive KBs, is essential for many tasks in natural language processing.
Ranked #1 on Entity Retrieval on ZESHEL
no code implementations • NAACL 2021 • Chenghao Jia, Yongliang Shen, Yechun Tang, Lu Sun, Weiming Lu
Prerequisite relations among concepts are crucial for educational applications, such as curriculum planning and intelligent tutoring.
1 code implementation • 19 May 2021 • Zeqi Tan, Yongliang Shen, Shuai Zhang, Weiming Lu, Yueting Zhuang
We utilize a non-autoregressive decoder to predict the final set of entities in one pass, in which we are able to capture dependencies between entities.
Ranked #6 on Nested Named Entity Recognition on ACE 2005
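The one-pass set-prediction view above trains by aligning predicted entities to the gold set via a bipartite matching; this sketch uses a brute-force matching and a toy mismatch cost (both assumptions for clarity; real systems use the Hungarian algorithm and model-based costs).

```python
# Sketch of set prediction for NER: a fixed number of queries each predict
# one entity in a single pass, and training aligns predictions to gold
# entities via a minimum-cost bipartite matching.

from itertools import permutations

def match_cost(pred, gold):
    # toy cost: one unit per mismatched field (span, type)
    return (pred["span"] != gold["span"]) + (pred["type"] != gold["type"])

def best_matching(preds, golds):
    """Brute-force the assignment of gold entities to predictions
    that minimizes total matching cost."""
    best, best_cost = None, float("inf")
    for perm in permutations(range(len(preds)), len(golds)):
        cost = sum(match_cost(preds[p], g) for p, g in zip(perm, golds))
        if cost < best_cost:
            best, best_cost = list(zip(perm, golds)), cost
    return best, best_cost

preds = [{"span": (0, 2), "type": "PER"}, {"span": (5, 7), "type": "LOC"},
         {"span": (9, 9), "type": "ORG"}]
golds = [{"span": (5, 7), "type": "LOC"}, {"span": (0, 2), "type": "PER"}]
assignment, cost = best_matching(preds, golds)
print(cost)  # 0: each gold entity found an exactly matching prediction
```

Unmatched predictions (here the third query) would be trained toward a "no entity" label, which is what lets the decoder emit a variable-sized entity set in one pass.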
1 code implementation • ACL 2021 • Yongliang Shen, Xinyin Ma, Zeqi Tan, Shuai Zhang, Wen Wang, Weiming Lu
Although these methods have the innate ability to handle nested NER, they suffer from high computational cost, neglect of boundary information, under-utilization of spans that partially match entities, and difficulty recognizing long entities.
Ranked #6 on Nested Named Entity Recognition on GENIA
Tasks: Chinese Named Entity Recognition, Named Entity Recognition (+3 more)
1 code implementation • 25 Jan 2021 • Yongliang Shen, Xinyin Ma, Yechun Tang, Weiming Lu
A joint entity and relation extraction framework constructs a unified model that performs entity recognition and relation extraction simultaneously, exploiting the dependency between the two tasks to mitigate the error propagation that pipeline models suffer from.
Ranked #1 on Relation Extraction on CoNLL04 (NER Micro F1 metric)
Tasks: Joint Entity and Relation Extraction, Reading Comprehension (+2 more)
no code implementations • Findings of the Association for Computational Linguistics 2020 • Jiale Yu, Yongliang Shen, Xinyin Ma, Chenghao Jia, Chen Chen, Weiming Lu
Extensive experiments on a real-world dataset show the effectiveness of our approach.
no code implementations • EMNLP 2020 • Xinyin Ma, Yongliang Shen, Gongfan Fang, Chen Chen, Chenghao Jia, Weiming Lu
To the best of our knowledge, our framework is the first data-free distillation framework designed for NLP tasks.
no code implementations • 11 Jun 2020 • Zeyun Tang, Yongliang Shen, Xinyin Ma, Wei Xu, Jiale Yu, Weiming Lu
Meanwhile, we propose Gated-RGCN to accumulate evidence on the path-based reasoning graph, which contains a new question-aware gating mechanism to regulate the usefulness of information propagating across documents and add question information during reasoning.