1 code implementation • Findings (ACL) 2022 • Ziyi Shou, Yuxin Jiang, Fangzhen Lin
To evaluate the effectiveness of our method, we apply it to the tasks of semantic textual similarity (STS) and text classification.
1 code implementation • 19 Feb 2024 • Yuxin Jiang, YuFei Wang, Chuhan Wu, Wanjun Zhong, Xingshan Zeng, Jiahui Gao, Liangyou Li, Xin Jiang, Lifeng Shang, Ruiming Tang, Qun Liu, Wei Wang
Knowledge editing techniques, aiming to efficiently modify a minor proportion of knowledge in large language models (LLMs) without negatively impacting performance across other inputs, have garnered widespread attention.
1 code implementation • 30 Jan 2024 • Wai-Chung Kwan, Xingshan Zeng, Yuxin Jiang, YuFei Wang, Liangyou Li, Lifeng Shang, Xin Jiang, Qun Liu, Kam-Fai Wong
Large language models (LLMs) are increasingly relied upon for complex multi-turn conversations across diverse real-world applications.
no code implementations • 1 Dec 2023 • Yuxin Li, Qiang Han, Mengying Yu, Yuxin Jiang, Chaikiat Yeo, Yiheng Li, Zihang Huang, Nini Liu, Hsuanhan Chen, XiaoJun Wu
3D object detection in Bird's-Eye-View (BEV) space has recently emerged as a prevalent approach in the field of autonomous driving.
1 code implementation • 31 Oct 2023 • Yuxin Jiang, YuFei Wang, Xingshan Zeng, Wanjun Zhong, Liangyou Li, Fei Mi, Lifeng Shang, Xin Jiang, Qun Liu, Wei Wang
To fill this research gap, in this paper, we propose FollowBench, a Multi-level Fine-grained Constraints Following Benchmark for LLMs.
1 code implementation • ICCV 2023 • Yuxin Jiang, Liming Jiang, Shuai Yang, Chen Change Loy
The challenges of this task lie in the complexity of the scenes, the unique features of anime style, and the lack of high-quality datasets to bridge the domain gap.
no code implementations • 24 May 2023 • Linhan Zhang, Qian Chen, Wen Wang, Yuxin Jiang, Bing Li, Wei Wang, Xin Cao
In this paper, we carefully design a new task called Multiple Definition Modeling (MDM) that pools together all contexts and definitions of target words.
1 code implementation • 22 May 2023 • Yuxin Jiang, Chunkit Chan, Mingyang Chen, Wei Wang
The practice of transferring knowledge from a sophisticated, proprietary large language model (LLM) to a compact, open-source LLM has garnered considerable attention.
no code implementations • 28 Apr 2023 • Chunkit Chan, Jiayang Cheng, Weiqi Wang, Yuxin Jiang, Tianqing Fang, Xin Liu, Yangqiu Song
This paper aims to quantitatively evaluate the performance of ChatGPT, an interactive large language model, on inter-sentential relations such as temporal relations, causal relations, and discourse relations.
no code implementations • 28 Feb 2023 • Linhan Zhang, Qian Chen, Wen Wang, Chong Deng, Xin Cao, Kongzhang Hao, Yuxin Jiang, Wei Wang
Experiments on the Semantic Textual Similarity (STS) benchmark show that WSBERT significantly improves sentence embeddings over BERT.
1 code implementation • 25 Nov 2022 • Yuxin Jiang, Linhan Zhang, Wei Wang
Due to the absence of explicit connectives, implicit discourse relation recognition (IDRR) remains a challenging task in discourse analysis.
1 code implementation • 14 Mar 2022 • Yuxin Jiang, Linhan Zhang, Wei Wang
To this end, we propose to integrate an Energy-based Hinge loss to enhance the pairwise discriminative power, inspired by the connection between the NT-Xent loss and the Energy-based Learning paradigm.
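The idea of pairing a contrastive NT-Xent objective with a hinge-style energy penalty can be sketched as follows. This is a minimal NumPy illustration of the general technique, not the paper's code: the function names, the margin formulation, and the way the two terms are combined are all assumptions for illustration.

```python
import numpy as np

def nt_xent(sim, temp=0.05):
    # NT-Xent: softmax cross-entropy over a similarity matrix where the
    # diagonal entries are the positive (anchor, positive) pairs.
    logits = sim / temp
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    exp = np.exp(logits)
    idx = np.arange(len(sim))
    probs = exp[idx, idx] / exp.sum(axis=1)
    return -np.log(probs).mean()

def energy_hinge(sim, margin=0.1):
    # Hinge penalty (illustrative): each positive similarity on the diagonal
    # should exceed every negative in its row by at least `margin`.
    n = len(sim)
    pos = np.diag(sim)[:, None]               # (n, 1) positive similarities
    mask = ~np.eye(n, dtype=bool)             # off-diagonal = negatives
    violations = np.maximum(0.0, margin - (pos - sim))[mask]
    return violations.mean()

def combined_loss(sim, lam=1.0):
    # Total objective: contrastive term plus weighted hinge term.
    return nt_xent(sim) + lam * energy_hinge(sim)
```

With a well-separated similarity matrix (diagonal near 1, off-diagonal near 0) the hinge term vanishes and only the contrastive term remains, which is the intended behavior of adding a margin-based discriminative penalty on top of NT-Xent.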
Ranked #1 on Semantic Textual Similarity on CxC
no code implementations • 3 Dec 2021 • Zhiyuan Liu, Chuanzheng Sun, Yuxin Jiang, Shiqi Jiang, Mei Ming
An Internet meme commonly takes the form of an image and is created by combining a meme template (image) and a caption (natural language sentence).
no code implementations • 3 Jul 2021 • Zhenyu Yuan, Yuxin Jiang, Jingjing Li, Handong Huang
Anisotropic analysis and inversion are commonly applied to prestack seismic gathers to characterize the dominant orientations and relative intensities of fractures.
1 code implementation • SEMEVAL 2021 • Yuxin Jiang, Ziyi Shou, Qijun Wang, Hao Wu, Fangzhen Lin
This paper presents our submitted system to SemEval 2021 Task 4: Reading Comprehension of Abstract Meaning.
no code implementations • 18 May 2020 • Zhenyu Yuan, Yuxin Jiang, Jingjing Li, Handong Huang
Regarded as a combination of feature learning and target learning, the newly proposed networks provide great capacity for high-hierarchy feature extraction and in-depth data mining.