no code implementations • 11 Jun 2024 • Shiao Meng, Xuming Hu, Aiwei Liu, Fukun Ma, Yawen Yang, Shuang Li, Lijie Wen
To this end, we systematically investigate the robustness of DocRE models to entity name variations in this work.
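A minimal sketch of the kind of entity-name perturbation such a robustness study might apply (the function name and plain string substitution are illustrative, not the paper's actual pipeline, which would operate on annotated entity spans):

```python
def rename_entities(text: str, name_map: dict) -> str:
    """Replace entity mentions to probe a DocRE model's robustness.

    Toy string substitution; a real pipeline would use entity span
    annotations rather than raw string matching.
    """
    for old_name, new_name in name_map.items():
        text = text.replace(old_name, new_name)
    return text

# Swap a person name while keeping the relational structure intact.
perturbed = rename_entities("Alice was born in Paris.", {"Alice": "Chidi"})
```

A robust DocRE model should predict the same relation (born-in) before and after such a renaming.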
1 code implementation • 16 May 2024 • Leyi Pan, Aiwei Liu, Zhiwei He, Zitian Gao, Xuandong Zhao, Yijian Lu, Binglin Zhou, Shuliang Liu, Xuming Hu, Lijie Wen, Irwin King
However, the abundance of LLM watermarking algorithms, their intricate mechanisms, and the complex evaluation procedures and perspectives pose challenges for researchers and the community to easily experiment with, understand, and assess the latest advancements.
no code implementations • 19 Mar 2024 • Runwei Guan, Liye Jia, Fengyufan Yang, Shanliang Yao, Erick Purwanto, Xiaohui Zhu, Eng Gee Lim, Jeremy Smith, Ka Lok Man, Xuming Hu, Yutao Yue
This text-guided, two-sensor design pairs fine-grained text prompts with the visual and radar features of the referred targets.
1 code implementation • 7 Mar 2024 • Yangning Li, Qingsong Lv, Tianyu Yu, Yinghui Li, Shulin Huang, Tingwei Lu, Xuming Hu, Wenhao Jiang, Hai-Tao Zheng, Hui Wang
To solve this issue, we first introduce negative seed entities in the inputs, which belong to the same fine-grained semantic class as the positive seed entities but differ in certain attributes.
no code implementations • 26 Feb 2024 • Weize Liu, Yinlong Xu, Hongxia Xu, Jintai Chen, Xuming Hu, Jian Wu
Recently, large language models (LLMs) have achieved tremendous breakthroughs in the field of language processing, yet the mechanisms by which they process multiple languages remain poorly understood.
1 code implementation • 26 Feb 2024 • Junzhe Chen, Xuming Hu, Shuodi Liu, Shiyu Huang, Wei-Wei Tu, Zhaofeng He, Lijie Wen
Recent advancements in large language models (LLMs) have revealed their potential for achieving autonomous agents possessing human-level intelligence.
no code implementations • 25 Feb 2024 • Xuming Hu, Xiaochuan Li, Junzhe Chen, Yinghui Li, Yangning Li, Xiaoguang Li, Yasheng Wang, Qun Liu, Lijie Wen, Philip S. Yu, Zhijiang Guo
To this end, we propose evaluating the robustness of generative search engines in a realistic, high-risk setting, where adversaries have only black-box system access and seek to deceive the model into returning incorrect responses.
no code implementations • 18 Feb 2024 • Yinghui Li, Shang Qin, Jingheng Ye, Shirong Ma, Yangning Li, Libo Qin, Xuming Hu, Wenhao Jiang, Hai-Tao Zheng, Philip S. Yu
To promote the CGEC field to better adapt to the era of LLMs, we rethink the roles of LLMs in the CGEC task so that they can be better utilized and explored in CGEC.
1 code implementation • 16 Feb 2024 • Yinghui Li, Qingyu Zhou, Yuanzhen Luo, Shirong Ma, Yangning Li, Hai-Tao Zheng, Xuming Hu, Philip S. Yu
In this paper, we challenge the reasoning and understanding abilities of LLMs by proposing a FaLlacy Understanding Benchmark (FLUB) containing cunning texts that are easy for humans to understand but difficult for models to grasp.
no code implementations • 13 Dec 2023 • Aiwei Liu, Leyi Pan, Yijian Lu, Jingjing Li, Xuming Hu, Xi Zhang, Lijie Wen, Irwin King, Hui Xiong, Philip S. Yu
Text watermarking algorithms play a crucial role in the copyright protection of textual content, yet their capabilities and application scenarios have been limited historically.
1 code implementation • 15 Nov 2023 • Weize Liu, Guocong Li, Kai Zhang, Bang Du, Qiyuan Chen, Xuming Hu, Hongxia Xu, Jintai Chen, Jian Wu
While techniques such as chain-of-thought (CoT) distillation have displayed promise in distilling LLMs into small language models (SLMs), there is a risk that distilled SLMs may still inherit flawed reasoning and hallucinations from LLMs.
1 code implementation • 25 Oct 2023 • Xuming Hu, Junzhe Chen, Aiwei Liu, Shiao Meng, Lijie Wen, Philip S. Yu
Additionally, our method is orthogonal to previous multimodal fusion approaches, and applying it on top of prior SOTA fusions further improves F1 by 5.47%.
1 code implementation • 24 Oct 2023 • Shiao Meng, Xuming Hu, Aiwei Liu, Shu'ang Li, Fukun Ma, Yawen Yang, Lijie Wen
However, existing works often struggle to obtain class prototypes with accurate relational semantics: 1) To build a prototype for a target relation type, they aggregate the representations of all entity pairs holding that relation; yet these entity pairs may also hold other relations, which disturbs the prototype.
1 code implementation • 11 Oct 2023 • Cunxiang Wang, Xiaoze Liu, Yuanhao Yue, Xiangru Tang, Tianhang Zhang, Cheng Jiayang, Yunzhi Yao, Wenyang Gao, Xuming Hu, Zehan Qi, Yidong Wang, Linyi Yang, Jindong Wang, Xing Xie, Zheng Zhang, Yue Zhang
This survey addresses the crucial issue of factuality in Large Language Models (LLMs).
2 code implementations • 10 Oct 2023 • Aiwei Liu, Leyi Pan, Xuming Hu, Shiao Meng, Lijie Wen
In this work, we propose a semantic invariant watermarking method for LLMs that provides both attack robustness and security robustness.
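As a rough illustration of logit-biasing watermarks in general (not this paper's exact algorithm): the vocabulary is split into a "green" subset derived from a key, and green tokens receive a small logit bonus at generation time; in a semantic-invariant scheme the key would come from a semantic embedding of the preceding context, so paraphrased contexts yield (nearly) the same split. All names below are hypothetical.

```python
import hashlib

def green_list(context_key: str, vocab_size: int, gamma: float = 0.5) -> set:
    # Derive a pseudo-random "green" vocabulary subset from a key.
    # In a semantic-invariant scheme, context_key would be computed from
    # a semantic embedding of the preceding text, not its exact tokens.
    seed = hashlib.sha256(context_key.encode()).hexdigest()
    greens = set()
    for token_id in range(vocab_size):
        h = hashlib.sha256(f"{seed}:{token_id}".encode()).hexdigest()
        if int(h, 16) % 1000 < gamma * 1000:
            greens.add(token_id)
    return greens

def watermark_logits(logits: list, greens: set, delta: float = 2.0) -> list:
    # Bias green tokens before sampling; a detector later checks whether
    # generated text over-represents green tokens.
    return [v + (delta if i in greens else 0.0) for i, v in enumerate(logits)]
```

The tension the paper addresses is that a purely token-keyed split is fragile under paraphrase (attack robustness), while a purely semantic key can leak information (security robustness).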
no code implementations • 8 Oct 2023 • Xuming Hu, Junzhe Chen, Xiaochuan Li, Yufei Guo, Lijie Wen, Philip S. Yu, Zhijiang Guo
Large language models (LLMs) have recently driven striking performance improvements across a range of natural language processing tasks.
3 code implementations • 30 Jul 2023 • Aiwei Liu, Leyi Pan, Xuming Hu, Shu'ang Li, Lijie Wen, Irwin King, Philip S. Yu
Experiments demonstrate that our algorithm attains high detection accuracy and computational efficiency through neural networks.
no code implementations • 29 May 2023 • Aiwei Liu, Wei Liu, Xuming Hu, Shuang Li, Fukun Ma, Yawen Yang, Lijie Wen
Based on these observations, we propose a method named p-align to improve the compositional generalization of Text-to-SQL models.
no code implementations • 26 May 2023 • Xuming Hu, Aiwei Liu, Zeqi Tan, Xin Zhang, Chenwei Zhang, Irwin King, Philip S. Yu
These techniques neither preserve the semantic consistency of the original sentences when rule-based augmentations are adopted, nor preserve the syntax structure of sentences when expressing relations using seq2seq models, resulting in less diverse augmentations.
no code implementations • 25 May 2023 • Xuming Hu, Junzhe Chen, Zhijiang Guo, Philip S. Yu
Evidence plays a crucial role in automated fact-checking.
no code implementations • 25 May 2023 • Xuming Hu, Zhijiang Guo, Zhiyang Teng, Irwin King, Philip S. Yu
Multimodal relation extraction (MRE) is the task of identifying the semantic relationships between two entities based on the context of the sentence-image pair.
1 code implementation • 22 May 2023 • Shuang Li, Xuming Hu, Aiwei Liu, Yawen Yang, Fukun Ma, Philip S. Yu, Lijie Wen
In this paper, we propose a novel Soft prompt learning framework with the Multilingual Verbalizer (SoftMV) for XNLI.
no code implementations • 12 May 2023 • Yawen Yang, Xuming Hu, Fukun Ma, Shu'ang Li, Aiwei Liu, Lijie Wen, Philip S. Yu
Existing works for nested NER ignore the recognition order and boundary position relation of nested entities.
1 code implementation • 2 May 2023 • Xuming Hu, Zhaochen Hong, Zhijiang Guo, Lijie Wen, Philip S. Yu
In light of this, we propose ReRead, a fact verification model that retrieves evidence and verifies claims by: (1) training the evidence retriever to obtain interpretable evidence (i.e., satisfying faithfulness and plausibility criteria); and (2) training the claim verifier to revisit the evidence retrieved by the optimized retriever, improving accuracy.
1 code implementation • 2 May 2023 • Xuming Hu, Zhaochen Hong, Chenwei Zhang, Irwin King, Philip S. Yu
Relation extraction (RE) aims to extract potential relations according to the context of two entities, thus, deriving rational contexts from sentences plays an important role.
1 code implementation • 12 Mar 2023 • Aiwei Liu, Xuming Hu, Lijie Wen, Philip S. Yu
This paper presents the first comprehensive analysis of ChatGPT's Text-to-SQL ability.
no code implementations • 11 Nov 2022 • Xuming Hu, Shiao Meng, Chenwei Zhang, Xiangli Yang, Lijie Wen, Irwin King, Philip S. Yu
Low-Resource Information Extraction (LRIE) strives to exploit unlabeled data, reducing the required resources and human annotation.
no code implementations • 3 Nov 2022 • Zeqi Tan, Yongliang Shen, Xuming Hu, Wenqi Zhang, Xiaoxia Cheng, Weiming Lu, Yueting Zhuang
Joint entity and relation extraction has been a core task in the field of information extraction.
1 code implementation • 31 Oct 2022 • Aiwei Liu, Honghai Yu, Xuming Hu, Shu'ang Li, Li Lin, Fukun Ma, Yawen Yang, Lijie Wen
We propose the first character-level white-box adversarial attack method against transformer models.
no code implementations • 19 Oct 2022 • Xuming Hu, Yong Jiang, Aiwei Liu, Zhongqiang Huang, Pengjun Xie, Fei Huang, Lijie Wen, Philip S. Yu
Data augmentation techniques have been used to alleviate the problem of scarce labeled data in various NER tasks (flat, nested, and discontinuous NER tasks).
no code implementations • COLING 2022 • Xuming Hu, Zhijiang Guo, Yu Fu, Lijie Wen, Philip S. Yu
A scene graph is a semantic representation that expresses the objects, attributes, and relationships between objects in a scene.
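Concretely, a scene graph can be stored as sets of objects, (object, attribute) pairs, and (subject, relation, object) triples; a minimal illustrative encoding (not the paper's exact format) might be:

```python
# A toy scene graph for "a man riding a brown horse":
# objects, their attributes, and the relations between them.
scene_graph = {
    "objects": ["man", "horse"],
    "attributes": [("horse", "brown")],
    "relations": [("man", "riding", "horse")],
}
```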
1 code implementation • COLING 2022 • Xin Zhang, Yong Jiang, Xiaobin Wang, Xuming Hu, Yueheng Sun, Pengjun Xie, Meishan Zhang
Successful machine-learning-based Named Entity Recognition models can fail on texts from special domains, such as Chinese addresses and e-commerce titles, which require adequate background knowledge.
1 code implementation • 8 Aug 2022 • Aiwei Liu, Xuming Hu, Li Lin, Lijie Wen
First, we extract a schema linking graph from PLMs through a probing procedure in an unsupervised manner.
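A toy sketch of what a schema linking graph looks like: edges connect question tokens to the schema items they likely refer to. Here simple string overlap stands in for the unsupervised PLM probing the paper uses; all names are illustrative.

```python
def schema_linking(question_tokens, schema_items):
    # Build (question token, schema item) edges. String overlap is a
    # stand-in scorer; the paper derives these links by probing a PLM.
    edges = []
    for tok in question_tokens:
        for item in schema_items:
            if tok.lower() == item.lower() or tok.lower() in item.lower().split("_"):
                edges.append((tok, item))
    return edges

# "name" links to the column student_name; "of" links to nothing.
links = schema_linking(["name", "of", "students"], ["student_name", "id"])
```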
1 code implementation • 25 Jun 2022 • Yueen Ma, Zixing Song, Xuming Hu, Jingjing Li, Yifei Zhang, Irwin King
Since it is intractable for data augmentation to fully capture the structural information of the ConcreteGraph due to the large number of potential concept pairs, we further introduce a novel Graph Component Contrastive Learning framework to implicitly learn the complete structure of the ConcreteGraph.
1 code implementation • NAACL 2022 • Xuming Hu, Zhijiang Guo, Guanyu Wu, Aiwei Liu, Lijie Wen, Philip S. Yu
The explosion of misinformation spreading in the media ecosystem urges for automated fact-checking.
no code implementations • 31 May 2022 • Shu'ang Li, Xuming Hu, Li Lin, Aiwei Liu, Lijie Wen, Philip S. Yu
Natural Language Inference (NLI) is an increasingly essential task in natural language understanding, which requires inferring the relationship between sentence pairs (premise and hypothesis).
1 code implementation • NAACL 2022 • Xuming Hu, Shuliang Liu, Chenwei Zhang, Shu'ang Li, Lijie Wen, Philip S. Yu
Unsupervised relation extraction aims to extract the relationship between entities from natural language sentences without prior information on relational scope or distribution.
no code implementations • 5 Feb 2022 • Xiaohe Li, Lijie Wen, Yawen Deng, Fuli Feng, Xuming Hu, Lei Wang, Zide Fan
Graph Neural Network (GNN) is an emerging technique for graph-based learning tasks such as node classification.
no code implementations • 26 Jan 2022 • Shu'ang Li, Xuming Hu, Li Lin, Lijie Wen
We adopt a cross attention module to learn the joint representations of the sentence pairs.
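A cross attention module of this kind computes, for each token of one sentence, a softmax-weighted mixture of the other sentence's token representations. A dependency-free sketch over plain lists (real implementations use learned query/key/value projections and batched tensors):

```python
import math

def cross_attention(queries, keys, values):
    # Scaled dot-product cross attention: each query attends over all
    # keys, and returns the softmax-weighted average of the values.
    d = len(queries[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        m = max(scores)  # subtract max for numerical stability
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out
```

Running both attention directions (premise attending over hypothesis, and vice versa) yields the joint pair representation described above.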
no code implementations • 18 Jan 2022 • Li Lin, Yixin Cao, Lifu Huang, Shu'ang Li, Xuming Hu, Lijie Wen, Jianmin Wang
To alleviate the knowledge forgetting issue, we design two modules, Im and Gm, for each type of knowledge, which are combined via prompt tuning.
1 code implementation • EMNLP 2021 • Xuming Hu, Chenwei Zhang, Yawen Yang, Xiaohe Li, Li Lin, Lijie Wen, Philip S. Yu
Low-resource Relation Extraction (LRE) aims to extract relation facts from limited labeled corpora when human annotation is scarce.
1 code implementation • Findings (EMNLP) 2021 • Xuming Hu, Chenwei Zhang, Fukun Ma, Chenyao Liu, Lijie Wen, Philip S. Yu
To alleviate human efforts from obtaining large-scale annotations, Semi-Supervised Relation Extraction methods aim to leverage unlabeled data in addition to learning from limited samples.
1 code implementation • EMNLP 2020 • Xuming Hu, Chenwei Zhang, Yusong Xu, Lijie Wen, Philip S. Yu
Open relation extraction is the task of extracting open-domain relation facts from natural language sentences.