no code implementations • COLING 2022 • Lisung Chen, Nuo Chen, Yuexian Zou, Yong Wang, Xinzhong Sun
Furthermore, we propose a threshold-free multi-intent classifier that utilizes the output of the IND task and detects multiple intents without depending on a threshold.
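The threshold-free selection step could be sketched roughly as follows. This is an illustrative reconstruction, not the paper's code: an intent-number-detection (IND) head first predicts how many intents K the utterance carries, and the top-K scoring intents are then returned, so no decision threshold is needed. The names `intent_logits` and `predicted_k` are our own.

```python
def select_intents(intent_logits, predicted_k):
    """Return indices of the predicted_k highest-scoring intents."""
    ranked = sorted(range(len(intent_logits)),
                    key=lambda i: intent_logits[i], reverse=True)
    return sorted(ranked[:predicted_k])

# Toy utterance "book a flight and find a hotel" -> two intents
intent_logits = [2.3, -0.4, 1.8, -1.1]   # scores for 4 intent labels
predicted_k = 2                          # output of the IND task
print(select_intents(intent_logits, predicted_k))  # -> [0, 2]
```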
1 code implementation • 6 May 2024 • Qijiong Liu, Xiaoyu Dong, Jiaren Xiao, Nuo Chen, Hengchang Hu, Jieming Zhu, Chenxu Zhu, Tetsuya Sakai, Xiao-Ming Wu
Finally, the survey analyzes the remaining challenges and anticipates future trends in VQ4Rec, including the challenges associated with the training of vector quantization, the opportunities presented by large language models, and emerging trends in multimodal recommender systems.
no code implementations • 11 Apr 2024 • Jiayi Wu, Renyu Zhu, Nuo Chen, Qiushi Sun, Xiang Li, Ming Gao
Over the past few years, we have witnessed remarkable advancements in Code Pre-trained Models (CodePTMs).
no code implementations • 27 Mar 2024 • Nuo Chen, Jiqun Liu, Hanpei Fang, Yuankai Luo, Tetsuya Sakai, Xiao-Ming Wu
This study examines the decoy effect's underexplored influence on user search interactions and methods for measuring information retrieval (IR) systems' vulnerability to this effect.
1 code implementation • 6 Mar 2024 • Xidong Wang, Nuo Chen, Junyin Chen, Yan Hu, Yidong Wang, Xiangbo Wu, Anningzhe Gao, Xiang Wan, Haizhou Li, Benyou Wang
Despite the vast repository of global medical knowledge predominantly being in English, local languages are crucial for delivering tailored healthcare services, particularly in areas with limited medical resources.
no code implementations • 25 Feb 2024 • Nuo Chen, Yuhan Li, Jianheng Tang, Jia Li
Large language models (LLMs) have achieved impressive success across several fields, but their proficiency in understanding and resolving complex graph problems is less explored.
1 code implementation • 19 Feb 2024 • Nuo Chen, Hongguang Li, Juhua Huang, Baoyuan Wang, Jia Li
Existing retrieval-based methods have made significant strides in maintaining long-term conversations.
no code implementations • 18 Dec 2023 • Nuo Chen, Hongguang Li, Baoyuan Wang, Jia Li
IMP-TIP follows the "From Good to Great" concept, collecting multiple potential solutions from both LLMs and their Tool-Augmented counterparts for the same math problem, and then selecting or re-generating the most accurate answer after cross-checking these solutions via tool-augmented interleaf prompting.
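The cross-checking idea can be sketched in a deliberately minimal form (this is our illustration, not the authors' implementation): gather candidate answers from a vanilla LLM and its tool-augmented counterpart, keep the answer the candidates agree on, and fall back to re-generation when there is no consensus.

```python
from collections import Counter

def cross_check(llm_answers, tool_answers):
    """Pick the majority answer across both candidate pools."""
    counts = Counter(llm_answers + tool_answers)
    answer, votes = counts.most_common(1)[0]
    if votes > 1:          # at least two candidates agree
        return answer
    return None            # no consensus -> trigger re-generation

print(cross_check(["14", "12"], ["14"]))  # -> "14"
```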
1 code implementation • 7 Dec 2023 • Nuo Chen, Ning Wu, Shining Liang, Ming Gong, Linjun Shou, Dongmei Zhang, Jia Li
This paper presents an in-depth analysis of Large Language Models (LLMs), focusing on LLaMA, a prominent open-source foundational model in natural language processing.
no code implementations • 4 Dec 2023 • Ping Zhou, Nuo Chen, Yuda Xu, Chengcai Xu
Light field imaging in restrictive object space (ROS-LF) is complicated but significant.
no code implementations • 4 Nov 2023 • Nuo Chen, Jiqun Liu, Tetsuya Sakai, Xiao-Ming Wu
In recent years, the influence of cognitive effects and biases on users' thinking, behaving, and decision-making has garnered increasing attention in the field of interactive information retrieval.
1 code implementation • 31 Oct 2023 • Nuo Chen, Zinan Zheng, Ning Wu, Ming Gong, Yangqiu Song, Dongmei Zhang, Jia Li
This indicates that crafting multilingual corpora can be regarded as a vital strategy for enhancing model performance in a specific language, especially in mathematical reasoning tasks.
1 code implementation • 19 Oct 2023 • Jianing Wang, Qiushi Sun, Nuo Chen, Chengyu Wang, Jun Huang, Ming Gao, Xiang Li
The recent success of large pre-trained language models (PLMs) heavily hinges on massive labeled data, which typically produces inferior performance in low-resource scenarios.
no code implementations • 6 Jul 2023 • Nuo Chen, Tetsuya Sakai
In this study, we investigate the statistical stability of C/W/L/A metrics from the perspective of: (1) the system ranking similarity among aggregations, (2) the system ranking consistency of aggregations and (3) the discriminative power of aggregations.
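As a rough illustration of the first perspective, the ranking similarity between two aggregations can be quantified with Kendall's tau over per-system scores. The scores below are made up for illustration and are not from the study.

```python
def kendall_tau(xs, ys):
    """Kendall's tau-a between two equal-length score lists."""
    n = len(xs)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (xs[i] - xs[j]) * (ys[i] - ys[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

agg_a = [0.42, 0.38, 0.51, 0.30]  # per-system scores under aggregation A
agg_b = [0.40, 0.36, 0.49, 0.33]  # per-system scores under aggregation B
print(kendall_tau(agg_a, agg_b))  # -> 1.0 (identical system rankings)
```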
1 code implementation • 15 Jun 2023 • Yiming Li, Sihang Li, Xinhao Liu, Moonjun Gong, Kenan Li, Nuo Chen, Zijun Wang, Zhiheng Li, Tao Jiang, Fisher Yu, Yue Wang, Hang Zhao, Zhiding Yu, Chen Feng
Monocular scene understanding is a foundational component of autonomous systems.
no code implementations • 10 Jun 2023 • Jianing Wang, Qiushi Sun, Nuo Chen, Xiang Li, Ming Gao
To mitigate this brittleness, we propose a novel Chain-of-Knowledge (CoK) prompting, where we aim at eliciting LLMs to generate explicit pieces of knowledge evidence in the form of structure triple.
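A Chain-of-Knowledge style prompt might look like the sketch below; the template wording is our illustration, not the paper's exact prompt. The idea is simply to elicit explicit evidence triples before the final answer.

```python
def cok_prompt(question):
    """Build a prompt that requests evidence triples before the answer."""
    return (
        f"Question: {question}\n"
        "First, list the knowledge evidence as triples "
        "(subject, relation, object).\n"
        "Evidence:\n"
        "Then answer based on the evidence.\n"
        "Answer:"
    )

print(cok_prompt("Which country is Mount Fuji in?"))
```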
1 code implementation • 23 May 2023 • Qiushi Sun, Nuo Chen, Jianing Wang, Xiang Li, Ming Gao
To tackle the issue, in this paper, we present TransCoder, a unified Transferable fine-tuning strategy for Code representation learning.
no code implementations • 17 May 2023 • Chengcheng Han, Liqing Cui, Renyu Zhu, Jianing Wang, Nuo Chen, Qiushi Sun, Xiang Li, Ming Gao
In this paper, we introduce gradient descent into the black-box tuning scenario through knowledge distillation.
1 code implementation • 14 May 2023 • Qiushi Sun, Chengcheng Han, Nuo Chen, Renyu Zhu, Jingyang Gong, Xiang Li, Ming Gao
Large language models (LLMs) have shown increasing power on various natural language processing (NLP) tasks.
3 code implementations • 11 May 2023 • Qijiong Liu, Nuo Chen, Tetsuya Sakai, Xiao-Ming Wu
Personalized content-based recommender systems have become indispensable tools for users to navigate through the vast amount of content available on platforms like daily news websites and book recommendation services.
1 code implementation • 9 May 2023 • Nuo Chen, Linjun Shou, Ming Gong, Jian Pei, Bowen Cao, Jianhui Chang, Daxin Jiang, Jia Li
Learning better unsupervised sentence representations is currently a major pursuit of the natural language processing community.
1 code implementation • CVPR 2023 • Xinyi Ying, Li Liu, Yingqian Wang, Ruojing Li, Nuo Chen, Zaiping Lin, Weidong Sheng, Shilin Zhou
Interestingly, during the training phase supervised by point labels, we discover that CNNs first learn to segment a cluster of pixels near the targets, and then gradually converge to predict the ground-truth point labels.
no code implementations • 12 Mar 2023 • Tengtao Song, Nuo Chen, Ji Jiang, Zhihong Zhu, Yuexian Zou
Since incorporating syntactic information, such as dependency structures, into neural models can promote a better understanding of sentences, such methods have been widely used in NLP tasks.
2 code implementations • 28 Feb 2023 • Jianing Wang, Nuo Chen, Qiushi Sun, Wenkang Huang, Chengyu Wang, Ming Gao
In this paper, we introduce HugNLP, a unified and comprehensive library for natural language processing (NLP) built on the prevalent backend of HuggingFace Transformers. It is designed to let NLP researchers easily utilize off-the-shelf algorithms and develop novel methods with user-defined models and tasks in real-world scenarios.
1 code implementation • 27 Feb 2023 • Nuo Chen, Hongguang Li, Junqing He, Yinan Bao, Xinshi Lin, Qi Yang, Jianfeng Liu, Ruyi Gan, Jiaxing Zhang, Baoyuan Wang, Jia Li
Thus, models' comprehension ability in real scenarios is hard to evaluate reasonably.
1 code implementation • 23 Feb 2023 • Qichen Ye, Bowen Cao, Nuo Chen, Weiyuan Xu, Yuexian Zou
Recent KAQA systems tend to integrate linguistic knowledge from pre-trained language models (PLMs) and factual knowledge from knowledge graphs (KGs) to answer complex questions. Despite their promising results, a bottleneck remains in effectively fusing the representations from PLMs and KGs because of (i) the semantic and distributional gaps between them, and (ii) the difficulty of joint reasoning over the knowledge provided by both modalities.
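The fusion bottleneck can be illustrated with a deliberately simplified sketch. This is not the paper's architecture: real KAQA systems learn the fusion (e.g., via attention), whereas here a fixed scalar gate mixes a PLM text vector with a KG entity vector.

```python
def gated_fuse(text_vec, kg_vec, gate=0.5):
    """Mix a PLM representation and a KG representation with a scalar gate."""
    return [gate * t + (1 - gate) * k for t, k in zip(text_vec, kg_vec)]

# Toy 2-d representations of the same entity from each modality:
print(gated_fuse([1.0, 0.0], [0.0, 1.0]))  # -> [0.5, 0.5]
```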
1 code implementation • 17 Feb 2023 • Nuo Chen, Hongguang Li, Yinan Bao, Baoyuan Wang, Jia Li
To this end, we construct a new dataset called Penguin to promote the research of MRC, providing a training and test bed for natural response generation to real scenarios.
no code implementations • 16 Feb 2023 • Nuo Chen, Linjun Shou, Ming Gong, Jian Pei, Chenyu You, Jianhui Chang, Daxin Jiang, Jia Li
For instance, TPLMs jointly pre-trained on table and text input could be effective for tasks that also take joint table-text input, such as table question answering, but they may fail for tasks with only tables or only text as input, such as table retrieval.
no code implementations • 12 Dec 2022 • Yang Liu, Yu Rong, Zhuoning Guo, Nuo Chen, Tingyang Xu, Fugee Tsung, Jia Li
To address these challenges, we formulate micro-perspective mobility modeling as computing the relevance score between a diffusion and a location, conditioned on a geometric graph.
1 code implementation • 13 Nov 2022 • Nuo Chen, Yan Wang, Haiyun Jiang, Deng Cai, Yuhan Li, Ziyang Chen, Longyue Wang, Jia Li
In this paper, we introduce the Harry Potter Dialogue (HPD) dataset, designed to advance the study of dialogue agents and character alignment.
no code implementations • 19 Oct 2022 • Tetsuya Sakai, Sijie Tao, Maria Maistro, Zhumin Chu, Yujing Li, Nuo Chen, Nicola Ferro, Junjie Wang, Ian Soboroff, Yiqun Liu
The noise is due to a fatal bug in the backend of our relevance assessment interface.
1 code implementation • 7 Oct 2022 • Nuo Chen, Qiushi Sun, Renyu Zhu, Xiang Li, Xuesong Lu, Ming Gao
To interpret these models, some probing methods have been applied.
no code implementations • 18 Aug 2022 • Nuo Chen, Chenyu You
To predict the answer, it is common practice to employ a predictor that draws information only from the final encoder layer, which generates the coarse-grained representations of the source sequences, i.e., the passage and question.
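One natural remedy, sketched below under toy assumptions, is to fuse hidden states from several encoder layers rather than feeding the predictor only the last one; the uniform averaging and shapes here are purely illustrative, not the paper's method.

```python
def fuse_layers(layer_states):
    """Average per-layer hidden vectors (each a list of floats)."""
    n = len(layer_states)
    dim = len(layer_states[0])
    return [sum(layer[d] for layer in layer_states) / n
            for d in range(dim)]

# Three encoder layers, hidden size 4 (toy numbers):
states = [[1.0, 0.0, 2.0, 4.0],
          [3.0, 2.0, 2.0, 0.0],
          [2.0, 4.0, 2.0, 2.0]]
print(fuse_layers(states))  # -> [2.0, 2.0, 2.0, 2.0]
```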
1 code implementation • 16 Jun 2022 • Ziqian Dai, Jianwei Yu, Yan Wang, Nuo Chen, Yanyao Bian, Guangzhi Li, Deng Cai, Dong Yu
Prosodic boundary plays an important role in text-to-speech synthesis (TTS) in terms of naturalness and readability.
no code implementations • Findings (NAACL) 2022 • Chenyu You, Nuo Chen, Fenglin Liu, Shen Ge, Xian Wu, Yuexian Zou
To evaluate the capacity of SCQA systems in a dialogue-style interaction, we assemble a Spoken Conversational Question Answering (Spoken-CoQA) dataset with more than 40k question-answer pairs from 4k conversations.
no code implementations • 23 Apr 2022 • Yushu Zhang, Nuo Chen, Shuren Qi, Mingfu Xue, Xiaochun Cao
In this paper, we explore a solution from the perspective of spatial correlation, which exhibits generic detection capability for both conventional and deep-learning-based recoloring.
no code implementations • NAACL 2022 • Nuo Chen, Linjun Shou, Ming Gong, Jian Pei, Daxin Jiang
Large-scale cross-lingual pre-trained language models (xPLMs) have shown effectiveness in cross-lingual sequence labeling tasks (xSL), such as cross-lingual machine reading comprehension (xMRC) by transferring knowledge from a high-resource language to low-resource languages.
no code implementations • 9 Dec 2021 • Nuo Chen, Linjun Shou, Ming Gong, Jian Pei, Daxin Jiang
Cross-lingual Machine Reading Comprehension (xMRC) is challenging due to the lack of training data in low-resource languages.
no code implementations • Findings (EMNLP) 2021 • Chenyu You, Nuo Chen, Yuexian Zou
In this paper, we propose novel training schemes for spoken question answering with a self-supervised training stage and a contrastive representation learning stage.
no code implementations • 15 Aug 2021 • Shichao Jia, Zeyu Li, Nuo Chen, Jiawan Zhang
This paper proposes a visual explainable active learning approach with its design and implementation called semantic navigator to solve the above problems.
no code implementations • 12 Aug 2021 • Li Wang, Rongzhi Gu, Nuo Chen, Yuexian Zou
Recently proposed metric learning approaches have improved the generalizability of models for the KWS task, and 1D-CNN-based KWS models have achieved state-of-the-art (SOTA) results in terms of model size.
no code implementations • 4 Jun 2021 • Nuo Chen, Chenyu You, Yuexian Zou
We also utilize the proposed self-supervised learning tasks to capture intra-sentence coherence.
no code implementations • 20 Dec 2020 • Nuo Chen, Fenglin Liu, Chenyu You, Peilin Zhou, Yuexian Zou
To predict the answer, it is common practice to employ a predictor that draws information only from the final encoder layer, which generates the coarse-grained representations of the source sequences, i.e., the passage and question.
no code implementations • 1 Nov 2020 • Baihua Shi, Nuo Chen, Xicheng Zhu, Yuwen Qian, Yijin Zhang, Feng Shu, Jiangzhou Wang
In this paper, we present a new scenario of direction of arrival (DOA) estimation using a massive multiple-input multiple-output (MIMO) receive array with low-resolution analog-to-digital converters (ADCs), which can strike a good balance between performance and circuit cost.
no code implementations • 21 Oct 2020 • Chenyu You, Nuo Chen, Yuexian Zou
However, the recent work shows that ASR systems generate highly noisy transcripts, which critically limit the capability of machine comprehension on the SQA task.
no code implementations • 21 Oct 2020 • Chenyu You, Nuo Chen, Yuexian Zou
Spoken conversational question answering (SCQA) requires machines to model complex dialogue flow given the speech utterances and text corpora.
no code implementations • 18 Oct 2020 • Chenyu You, Nuo Chen, Fenglin Liu, Dongchao Yang, Yuexian Zou
In spoken question answering, QA systems are designed to answer questions from contiguous text spans within the related speech transcripts.