no code implementations • 3 Apr 2024 • Wanyun Cui, Qianle Wang
We find that a small subset of "cherry" parameters exhibits a disproportionately large influence on model performance, while the vast majority of parameters have minimal impact.
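A minimal sketch of the general idea only, not the paper's actual selection criterion: parameters are ranked by a first-order |weight × gradient| sensitivity proxy on a toy model, and the top ~1% cutoff is an assumption made purely for illustration.

```python
import torch
import torch.nn as nn

model = nn.Linear(16, 4)                              # toy stand-in for one model layer
x, y = torch.randn(8, 16), torch.randint(0, 4, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

# First-order sensitivity proxy per parameter: |w * dL/dw|.
scores = torch.cat([(p * p.grad).abs().flatten() for p in model.parameters()])
top = scores.topk(max(1, scores.numel() // 100))      # top ~1% candidate "cherry" parameters
frac = float(top.values.sum() / scores.sum())
print(f"top 1% of parameters carry {frac:.1%} of total sensitivity")
```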
no code implementations • 12 Oct 2023 • Wanyun Cui, Linqiu Zhang, Qianle Wang, Shuyang Cai
Addressing these challenges, this paper introduces SAID (Social media AI Detection), a novel benchmark developed to assess AI-text detection models' capabilities on real social media platforms.
1 code implementation • 6 Oct 2023 • Wanyun Cui, Qianle Wang
Generating diverse and sophisticated instructions for downstream tasks with Large Language Models (LLMs) is pivotal for advancing their effectiveness.
no code implementations • 5 Jul 2023 • Shuyang Cai, Wanyun Cui
Existing detectors are built upon the assumption that there are distributional gaps between human-generated and AI-generated text.
no code implementations • 24 May 2023 • Wanyun Cui, Xingran Chen
Previous research has demonstrated that natural language explanations provide valuable inductive biases that guide models, thereby improving generalization and data efficiency.
no code implementations • 24 May 2023 • Wanyun Cui, Xingran Chen
One key observation is that the upper bound of batch partitioning can be reduced to the classic graph k-cut problem.
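A toy illustration of the formulation only, not the authors' algorithm: samples become graph nodes, pairwise weights become edges, and a batch partition corresponds to a k-cut. The greedy, capacity-constrained assignment below is just a stand-in heuristic to make the reduction concrete.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 12, 3
w = rng.random((n, n)); w = (w + w.T) / 2             # symmetric pairwise weights
np.fill_diagonal(w, 0.0)

batch_of = -np.ones(n, dtype=int)                     # -1 means "not yet assigned"
capacity = n // k
for i in range(n):
    assigned = batch_of >= 0
    # cut weight added by placing sample i in batch b: edges to assigned nodes outside b
    cost = [w[i, assigned & (batch_of != b)].sum() if (batch_of == b).sum() < capacity else np.inf
            for b in range(k)]
    batch_of[i] = int(np.argmin(cost))

cut = sum(w[i, j] for i in range(n) for j in range(i + 1, n) if batch_of[i] != batch_of[j])
print([list(map(int, np.where(batch_of == b)[0])) for b in range(k)], round(float(cut), 3))
```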
1 code implementation • 13 Nov 2022 • Wanyun Cui, Xingran Chen
In this paper, we propose a new method for knowledge base completion (KBC): instance-based learning (IBL).
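A hedged toy sketch of what instance-based completion can look like, not the paper's model: to complete (head, relation, ?), vote over the tails reached by similar head entities via the same relation. The knowledge base, entity embeddings, and similarity measure below are invented placeholders.

```python
from collections import Counter
import numpy as np

# Toy KB and made-up 2-d entity embeddings (placeholders, purely for illustration).
triples = [("lyon", "located_in", "france"), ("munich", "located_in", "germany"),
           ("cologne", "located_in", "germany")]
emb = {"lyon": np.array([0.9, 0.1]), "munich": np.array([0.2, 0.9]),
       "cologne": np.array([0.1, 0.8]), "hamburg": np.array([0.15, 0.95])}

def complete(head, rel, k=2):
    """Complete (head, rel, ?) by voting over the tails of the k most similar heads."""
    sim = lambda a, b: float(emb[a] @ emb[b] /
                             (np.linalg.norm(emb[a]) * np.linalg.norm(emb[b])))
    cands = [(h, t) for h, r, t in triples if r == rel and h != head]
    nearest = sorted(cands, key=lambda ht: -sim(head, ht[0]))[:k]
    return Counter(t for _, t in nearest).most_common(1)[0][0]

print(complete("hamburg", "located_in"))   # -> "germany", copied from similar instances
```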
2 code implementations • NeurIPS 2021 • Wanyun Cui, Xingran Chen
One weakness of the previous rule induction systems is that they only find rules within a knowledge base (KB) and therefore cannot generalize to more open and complex real-world rules.
1 code implementation • Findings (ACL) 2022 • Wanyun Cui, Xingran Chen
In this paper, we propose to use large-scale out-of-domain commonsense to enhance text representation.
no code implementations • 3 Jul 2021 • Wanyun Cui, Sen Yan
However, we found critical order violations between hard labels and soft labels in augmented samples.
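A hedged sketch of the kind of check this finding implies, under an assumed mixup-style augmentation setup: with mixing weight lam > 0.5 toward class a, a teacher soft label that ranks class b above class a contradicts the hard-label order. The function name and threshold are illustrative, not the paper's definition.

```python
import torch

def order_violation(soft_probs: torch.Tensor, class_a: int, class_b: int, lam: float) -> bool:
    """Return True if the teacher's soft label contradicts the hard mixing order."""
    hard_says_a = lam > 0.5                        # hard label ranks class a above class b
    soft_says_a = soft_probs[class_a] > soft_probs[class_b]
    return hard_says_a != soft_says_a

# toy example: mixup of classes 0 and 1 with lam = 0.7, but the teacher prefers class 1
teacher_probs = torch.tensor([0.30, 0.55, 0.15])
print(order_violation(teacher_probs, class_a=0, class_b=1, lam=0.7))   # True -> violation
```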
1 code implementation • EMNLP 2020 • Wanyun Cui, Guangyu Zheng, Wei Wang
MACD forces the decoupled text encoder to represent the visual information via contrastive learning.
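The MACD objective itself is not spelled out in this snippet; the following is a generic InfoNCE-style contrastive sketch showing one way paired text and visual features can be aligned so that a text encoder is forced to carry visual information.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(text_feat: torch.Tensor, vis_feat: torch.Tensor, tau: float = 0.07):
    text = F.normalize(text_feat, dim=-1)
    vis = F.normalize(vis_feat, dim=-1)
    logits = text @ vis.t() / tau                      # (B, B) similarity matrix
    targets = torch.arange(text.size(0))               # the diagonal holds the true pairs
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

loss = contrastive_loss(torch.randn(4, 256), torch.randn(4, 256))
print(loss.item())
```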
no code implementations • 25 Sep 2019 • Wanyun Cui
In addition to nodes representing the words of the sentence, we introduce hypernodes that represent candidate phrases in the attention.
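A minimal sketch under assumed details, not the paper's architecture: phrase hypernodes are formed here by mean-pooling word vectors over candidate spans and appended to the word nodes, so attention runs jointly over both. The pooling choice and spans are illustrative assumptions.

```python
import torch
import torch.nn as nn

d, words = 32, torch.randn(6, 32)                 # 6 word nodes for one sentence
phrases = [(0, 2), (3, 6)]                        # candidate phrase spans (start, end)
hypernodes = torch.stack([words[s:e].mean(dim=0) for s, e in phrases])

nodes = torch.cat([words, hypernodes]).unsqueeze(0)       # (1, 6 + 2, d)
attn = nn.MultiheadAttention(embed_dim=d, num_heads=4, batch_first=True)
out, weights = attn(nodes, nodes, nodes)
print(out.shape, weights.shape)                   # attention spans words and phrase hypernodes
```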
no code implementations • 25 Sep 2019 • Wanyun Cui, Guangyu Zheng, Wei Wang
In natural language inference, the semantics of some words do not affect the inference.
no code implementations • 22 Aug 2019 • Zhiqiang Shen, Zhankui He, Wanyun Cui, Jiahui Yu, Yutong Zheng, Chenchen Zhu, Marios Savvides
In order to distill diverse knowledge from different trained (teacher) models, we propose an adversarial learning strategy: a block-wise training loss guides and optimizes the predefined student network to recover the knowledge in the teacher models, while a discriminator network is simultaneously trained to distinguish teacher features from student features.
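A simplified sketch of an adversarial distillation step in this spirit, not the exact block-wise losses or architectures: a discriminator learns to separate teacher features from student features, while the student learns both to match the teacher and to fool the discriminator.

```python
import torch
import torch.nn as nn

teacher, student = nn.Linear(64, 64), nn.Linear(64, 64)
disc = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))
opt_s = torch.optim.Adam(student.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

x = torch.randn(16, 64)
with torch.no_grad():
    t_feat = teacher(x)                                   # frozen teacher features

# discriminator step: teacher features -> 1, student features -> 0
s_feat = student(x)
d_loss = bce(disc(t_feat), torch.ones(16, 1)) + bce(disc(s_feat.detach()), torch.zeros(16, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# student step: imitate the teacher and fool the discriminator
s_loss = nn.functional.mse_loss(s_feat, t_feat) + bce(disc(s_feat), torch.ones(16, 1))
opt_s.zero_grad(); s_loss.backward(); opt_s.step()
print(float(d_loss), float(s_loss))
```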
no code implementations • 6 Mar 2019 • Wanyun Cui, Yanghua Xiao, Haixun Wang, Yangqiu Song, Seung-won Hwang, Wei Wang
Based on these templates, our QA system KBQA effectively supports binary factoid questions, as well as complex questions which are composed of a series of binary factoid questions.
no code implementations • ICLR 2019 • Wanyun Cui, Guangyu Zheng, Zhiqiang Shen, Sihang Jiang, Wei Wang
Transfer learning aims to address data sparsity in a target domain by leveraging information from the source domain.
no code implementations • 20 Oct 2017 • Wanyun Cui, Xiyou Zhou, Hangyu Lin, Yanghua Xiao, Haixun Wang, Seung-won Hwang, Wei Wang
In this paper, we introduce verb patterns to represent verbs' semantics, such that each pattern corresponds to a single sense of the verb.