no code implementations • ICLR 2019 • Pengfei Liu, Xuanjing Huang
In this paper, we describe a general framework to systematically analyze current neural models for multi-task learning, in which we find that existing models expect to disentangle features into different spaces while features learned in practice are still entangled in shared space, leaving potential hazards for other training or unseen tasks.
1 code implementation • Findings (EMNLP) 2021 • Yiran Chen, PengFei Liu, Xipeng Qiu
In this paper, we present an adversarial meta-evaluation methodology that allows us to (i) diagnose the fine-grained strengths and weaknesses of 6 existing top-performing metrics over 24 diagnostic test datasets, and (ii) search for directions for further improvement via data augmentation.
no code implementations • 1 Jun 2024 • Shichao Sun, Ruifeng Yuan, Ziqiang Cao, Wenjie Li, PengFei Liu
Two strategies are designed to perform this iterative process: Prompt Chaining and Stepwise Prompt.
1 code implementation • 23 May 2024 • Xiangkun Hu, Dongyu Ru, Lin Qiu, Qipeng Guo, Tianhang Zhang, Yang Xu, Yun Luo, PengFei Liu, Yue Zhang, Zheng Zhang
In RefChecker, an extractor generates claim-triplets from a response, which are then evaluated by a checker against a reference.
1 code implementation • 29 Apr 2024 • Ruijie Xu, Zengzhi Wang, Run-Ze Fan, PengFei Liu
By analyzing 31 LLMs in the context of mathematical reasoning, we reveal substantial instances of training and even test set misuse, resulting in potentially unfair comparisons.
no code implementations • 22 Apr 2024 • Chengrui Wang, PengFei Liu, Min Zhou, Ming Zeng, Xubin Li, Tiezheng Ge, Bo Zheng
The style guidance is a hand image, e.g., the malformed hand itself, and is employed to furnish the style reference for hand refining.
1 code implementation • 15 Apr 2024 • PengFei Liu, Jun Tao, Zhixiang Ren
The task of chemical reaction prediction (CRP) plays a pivotal role in advancing drug discovery and material science.
Ranked #1 on Chemical Reaction Prediction on Mol-Instruction
2 code implementations • 8 Apr 2024 • Shijie Xia, Xuefeng Li, Yixin Liu, Tongshuang Wu, PengFei Liu
To measure reasoning beyond final-answer accuracy, we introduce ReasonEval, a new methodology for evaluating the quality of reasoning steps.
1 code implementation • 31 Mar 2024 • Yiqing Xie, Alex Xie, Divyanshu Sheth, PengFei Liu, Daniel Fried, Carolyn Rose
To demonstrate the complexity and solvability of examples in Exec-CSN, we present a human study demonstrating that 81.3% of the examples can be solved by humans and 61% are rated as "requires effort to solve".
1 code implementation • 2 Mar 2024 • Weizhe Yuan, PengFei Liu, Matthias Gallé
In particular, we present a model-in-the-loop framework that semi-automatically derives criteria from collected guidelines for different writing tasks and constructs in-context demonstrations for each criterion.
1 code implementation • 19 Feb 2024 • Run-Ze Fan, Xuefeng Li, Haoyang Zou, Junlong Li, Shwai He, Ethan Chern, Jiewen Hu, PengFei Liu
This paper explores elevating the quality of existing instruction data to better align with human values, introducing a simple and effective approach named ReAlign, which reformats the responses of instruction data into a format that better aligns with pre-established criteria and the collated evidence.
1 code implementation • 17 Feb 2024 • Junlong Li, Fan Zhou, Shichao Sun, Yikai Zhang, Hai Zhao, PengFei Liu
As a relative quality comparison of model responses, human and Large Language Model (LLM) preferences serve as common alignment goals in model fine-tuning and criteria in evaluation.
no code implementations • 14 Feb 2024 • Jiancheng Yang, Rui Shi, Liang Jin, Xiaoyang Huang, Kaiming Kuang, Donglai Wei, Shixuan Gu, Jianying Liu, PengFei Liu, Zhizhong Chai, Yongjie Xiao, Hao Chen, Liming Xu, Bang Du, Xiangyi Yan, Hao Tang, Adam Alessio, Gregory Holste, Jiapeng Zhang, Xiaoming Wang, Jianye He, Lixuan Che, Hanspeter Pfister, Ming Li, Bingbing Ni
The resulting FracNet+ demonstrates competitive performance in rib fracture detection, which lays a foundation for further research and development in AI-assisted rib fracture detection and diagnosis.
no code implementations • 11 Feb 2024 • Taojie Kuang, PengFei Liu, Zhixiang Ren
The precise prediction of molecular properties is essential for advancements in drug development, particularly in virtual screening and compound optimization.
1 code implementation • 6 Feb 2024 • PengFei Liu, Jun Tao, Zhixiang Ren
Efficient molecular modeling and design are crucial for the discovery and exploration of novel molecules, and the incorporation of deep learning methods has revolutionized this field.
1 code implementation • 30 Jan 2024 • Steffi Chern, Ethan Chern, Graham Neubig, PengFei Liu
Despite the utility of Large Language Models (LLMs) across a wide range of tasks and scenarios, developing a method for reliably evaluating LLMs across varied contexts continues to be challenging.
1 code implementation • 13 Jan 2024 • Yikai Zhang, Junlong Li, PengFei Liu
Large Language Models (LLMs) are known to have limited extrapolation ability beyond their pre-trained context window, constraining their application in downstream tasks with lengthy inputs.
1 code implementation • 9 Jan 2024 • Shichao Sun, Junlong Li, Weizhe Yuan, Ruifeng Yuan, Wenjie Li, PengFei Liu
Critique, as a natural language description for assessing the quality of model-generated content, has played a vital role in the training, evaluation, and refinement of LLMs.
1 code implementation • 7 Jan 2024 • Yiwei Qin, Kaiqiang Song, Yebowen Hu, Wenlin Yao, Sangwoo Cho, Xiaoyang Wang, Xuansheng Wu, Fei Liu, PengFei Liu, Dong Yu
This paper introduces the Decomposed Requirements Following Ratio (DRFR), a new metric for evaluating Large Language Models' (LLMs) ability to follow instructions.
1 code implementation • 28 Dec 2023 • Yang Xiao, Yi Cheng, Jinlan Fu, Jiashuo Wang, Wenjie Li, PengFei Liu
Human behavior simulation with AI agents requires the agents to possess believability, which is crucial because it helps users establish trust in the agents and streamlines the fulfillment of the agents' goals.
1 code implementation • 28 Dec 2023 • Zengzhi Wang, Rui Xia, PengFei Liu
Our meticulous data collection and processing efforts included a complex suite of preprocessing, prefiltering, language identification, cleaning, filtering, and deduplication, ensuring the high quality of our corpus.
1 code implementation • 26 Dec 2023 • Chunpu Xu, Steffi Chern, Ethan Chern, Ge Zhang, Zekun Wang, Ruibo Liu, Jing Li, Jie Fu, PengFei Liu
In this paper, we aim to align large language models with the ever-changing, complex, and diverse human values (e.g., social norms) across time and locations.
1 code implementation • 12 Dec 2023 • Yuqing Yang, Ethan Chern, Xipeng Qiu, Graham Neubig, PengFei Liu
Recent research has made significant strides in applying alignment techniques to enhance the helpfulness and harmlessness of large language models (LLMs) in accordance with human intentions.
1 code implementation • 16 Nov 2023 • Yiqing Xie, Sheng Zhang, Hao Cheng, PengFei Liu, Zelalem Gero, Cliff Wong, Tristan Naumann, Hoifung Poon, Carolyn Rose
Medical text generation aims to assist with administrative work and highlight salient information to support decision-making.
1 code implementation • 15 Nov 2023 • Yixin Liu, Alexander R. Fabbri, Jiawen Chen, Yilun Zhao, Simeng Han, Shafiq Joty, PengFei Liu, Dragomir Radev, Chien-Sheng Wu, Arman Cohan
Our study reveals that instruction controllable text summarization remains a challenging task for LLMs, since (1) all LLMs evaluated still make factual and other types of errors in their summaries; (2) all LLM-based evaluation methods cannot achieve a strong alignment with human annotators when judging the quality of candidate summaries; (3) different LLMs show large performance gaps in summary generation and evaluation.
no code implementations • 16 Oct 2023 • Haotian Zhou, Tingkai Liu, Qianli Ma, Jianbo Yuan, PengFei Liu, Yang You, Hongxia Yang
In this paper, we introduce a new dimension in SFT data selection: learnability.
no code implementations • 16 Oct 2023 • Qianli Ma, Haotian Zhou, Tingkai Liu, Jianbo Yuan, PengFei Liu, Yang You, Hongxia Yang
Recent years have seen considerable advancements in multi-step reasoning with Large Language Models (LLMs).
1 code implementation • 9 Oct 2023 • Junlong Li, Shichao Sun, Weizhe Yuan, Run-Ze Fan, Hai Zhao, PengFei Liu
The rapid development of Large Language Models (LLMs) has substantially expanded the range of tasks they can address.
1 code implementation • NeurIPS 2023 • Shiqi Chen, Yiran Zhao, Jinghan Zhang, I-Chun Chern, Siyang Gao, PengFei Liu, Junxian He
In this benchmark, we collect responses generated from LLMs and annotate factuality labels in a fine-grained manner.
1 code implementation • 23 Sep 2023 • PengFei Liu, Weibo Wang, Yuhan Guo, Jiubin Tan
Notably, to alleviate the inconsistency between classification score and localization quality during training and inference, under which some predictions with low classification scores but high LQE scores impair performance, we embed the LQE branch into the classification branch instead of setting them separately and independently, producing a joint classification-localization-quality representation.
no code implementations • 31 Aug 2023 • Yufei Li, Lingling Hou, PengFei Liu
We quantitatively assess the impacts of Downgrading Protected Areas (PAD) on biodiversity in the U.S.
1 code implementation • 14 Aug 2023 • PengFei Liu, Yiming Ren, Jun Tao, Zhixiang Ren
Large language models have made significant strides in natural language processing, enabling innovative applications in molecular science by processing textual representations of molecules.
Ranked #1 on Image Captioning on ChEBI-20
5 code implementations • 25 Jul 2023 • I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, PengFei Liu
With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT).
no code implementations • 10 Jul 2023 • I-Chun Chern, Zhiruo Wang, Sanjan Das, Bhavuk Sharma, PengFei Liu, Graham Neubig
Modern abstractive summarization models often generate summaries that contain hallucinated or contradictory information.
1 code implementation • 1 Jun 2023 • Sameer Jain, Vaishakh Keshava, Swarnashree Mysore Sathyendra, Patrick Fernandes, PengFei Liu, Graham Neubig, Chunting Zhou
Most frameworks that perform such multi-dimensional evaluation require training on large manually or synthetically generated datasets.
1 code implementation • 26 May 2023 • Vijay Viswanathan, Luyu Gao, Tongshuang Wu, PengFei Liu, Graham Neubig
Using this data, we compare various information retrieval algorithms on our test set and present a superior bi-encoder retriever for text-based dataset recommendation.
no code implementations • 24 May 2023 • Yueqi Song, Catherine Cui, Simran Khanuja, PengFei Liu, Fahim Faisal, Alissa Ostapenko, Genta Indra Winata, Alham Fikri Aji, Samuel Cahyawijaya, Yulia Tsvetkov, Antonios Anastasopoulos, Graham Neubig
Despite the major advances in NLP, significant disparities in NLP system performance across languages still exist.
1 code implementation • 23 May 2023 • Yixin Liu, Kejian Shi, Katherine S He, Longtian Ye, Alexander R. Fabbri, PengFei Liu, Dragomir Radev, Arman Cohan
Meanwhile, we perform a meta-analysis on this new learning setting that reveals a discrepancy between human and LLM-based evaluation, highlighting the benefits and risks of this LLM-as-reference setting we investigated.
5 code implementations • NeurIPS 2023 • Chunting Zhou, PengFei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, Susan Zhang, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer, Omer Levy
Large language models are trained in two stages: (1) unsupervised pretraining from raw text, to learn general-purpose representations, and (2) large scale instruction tuning and reinforcement learning, to better align to end tasks and user preferences.
no code implementations • 24 Mar 2023 • PengFei Liu, Wenjin Deng, Hengda Li, Jintai Wang, Yinglin Zheng, Yiwei Ding, Xiaohu Guo, Ming Zeng
In this paper, we present a method for this task with natural motions of the lip, facial expression, head pose, and eye states.
1 code implementation • 7 Mar 2023 • Yixin Liu, Alexander R. Fabbri, Yilun Zhao, PengFei Liu, Shafiq Joty, Chien-Sheng Wu, Caiming Xiong, Dragomir Radev
Interpretability and efficiency are two important considerations for the adoption of neural automatic metrics.
2 code implementations • 8 Feb 2023 • Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, PengFei Liu
Generative Artificial Intelligence (AI) has enabled the development of sophisticated models that are capable of producing high-caliber text, images, and other outputs through the utilization of large pre-trained models.
no code implementations • 23 Dec 2022 • Zhao Shan, Lei Wang, PengFei Liu, Tianyao Huang, Yimin Liu
To address this challenge, we use a novel iterative selection technique that breaks a difficult decision task into several easy tasks.
2 code implementations • 15 Dec 2022 • Yixin Liu, Alexander R. Fabbri, PengFei Liu, Yilun Zhao, Linyong Nan, Ruilin Han, Simeng Han, Shafiq Joty, Chien-Sheng Wu, Caiming Xiong, Dragomir Radev
Human evaluation is the foundation upon which the evaluation of both summarization systems and automatic metrics rests.
1 code implementation • 12 Dec 2022 • Yiwei Qin, Weizhe Yuan, Graham Neubig, PengFei Liu
Both have their advantages; discriminative metrics are able to directly optimize for the problem of distinguishing between good and bad outputs, while generative metrics can be trained using abundant raw text.
no code implementations • 12 Dec 2022 • Yiwei Qin, Graham Neubig, PengFei Liu
Recently, a large number of tuning strategies have been proposed to adapt pre-trained language models to downstream tasks.
3 code implementations • 18 Nov 2022 • Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, PengFei Liu, Yiming Yang, Jamie Callan, Graham Neubig
Much of this success can be attributed to prompting methods such as "chain-of-thought", which employ LLMs both to understand the problem description by decomposing it into steps and to solve each step of the problem.
Ranked #18 on Arithmetic Reasoning on GSM8K
2 code implementations • 13 Oct 2022 • Ming Zhong, Yang Liu, Da Yin, Yuning Mao, Yizhu Jiao, PengFei Liu, Chenguang Zhu, Heng Ji, Jiawei Han
We re-frame NLG evaluation as a Boolean Question Answering (QA) task, and by guiding the model with different questions, we can use one evaluator to evaluate from multiple dimensions.
no code implementations • 29 Aug 2022 • Yimin Yin, Renye Zhang, PengFei Liu, Wanxia Deng, Siliang He, Chen Li, Jinghua Zhang
To the best of our knowledge, this paper is the first comprehensive survey focusing on finger vein recognition based on artificial neural networks.
no code implementations • 23 Aug 2022 • Haris Widjaja, Kiril Gashteovski, Wiem Ben Rim, PengFei Liu, Christopher Malon, Daniel Ruffinelli, Carolin Lawrence, Graham Neubig
Knowledge Graphs (KGs) store information in the form of (head, predicate, tail)-triples.
2 code implementations • 22 Jun 2022 • Weizhe Yuan, PengFei Liu
In addition, we test our model on the 2022 College Entrance Examination English, held a few days ago (2022.06.08), and it achieves a total score of 134 (vs.
1 code implementation • 22 Jun 2022 • Yiwei Ding, Wenjin Deng, Yinglin Zheng, PengFei Liu, Meihong Wang, Xuan Cheng, Jianmin Bao, Dong Chen, Ming Zeng
In this paper, we present the Intra- and Inter-Human Relation Networks (I^2R-Net) for Multi-Person Pose Estimation.
Ranked #2 on Multi-Person Pose Estimation on OCHuman
no code implementations • NAACL 2022 • Yang Xiao, Jinlan Fu, See-Kiong Ng, PengFei Liu
In this paper, we ask the research question of whether all the datasets in the benchmark are necessary.
1 code implementation • 29 Apr 2022 • Jinlan Fu, See-Kiong Ng, PengFei Liu
This paper aims for a potential architectural improvement for multilingual learning and asks: Can different tasks from different languages be modeled in a monolithic framework, i.e., without any task/language-specific module?
3 code implementations • ACL 2022 • Yixin Liu, PengFei Liu, Dragomir Radev, Graham Neubig
Abstractive summarization models are commonly trained using maximum likelihood estimation, which assumes a deterministic (one-point) target distribution in which an ideal model will assign all the probability mass to the reference summary.
Ranked #2 on Text Summarization on X-Sum
no code implementations • ACL 2022 • Yang Xiao, Jinlan Fu, Weizhe Yuan, Vijay Viswanathan, Zhoumianze Liu, Yixin Liu, Graham Neubig, PengFei Liu
Despite data's crucial role in machine learning, most existing tools and research tend to focus on systems on top of existing data rather than how to interpret and manipulate data.
no code implementations • 27 Jan 2022 • Chunyong Yang, PengFei Liu, Yanli Chen, Hongbin Wang, Min Liu
The end-to-end TTS system is VITS, and the pre-training self-supervised model is wav2vec 2.0.
1 code implementation • 17 Jan 2022 • PengFei Liu, Kun Li, Helen Meng
Emotion recognition is a challenging and actively-studied research area that plays a critical role in emotion-aware human-computer interaction systems.
1 code implementation • 28 Jul 2021 • PengFei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, Graham Neubig
This paper surveys and organizes research works in a new paradigm in natural language processing, which we dub "prompt-based learning".
1 code implementation • NeurIPS 2021 • Weizhe Yuan, Graham Neubig, PengFei Liu
In this work, we conceptualize the evaluation of generated text as a text generation problem, modeled using pre-trained sequence-to-sequence models.
1 code implementation • Findings (ACL) 2021 • Priyam Tejaswin, Dhruv Naik, PengFei Liu
(2) The performance of models and the reliability of metrics depend on sample complexity.
1 code implementation • ACL 2021 • Vijay Viswanathan, Graham Neubig, PengFei Liu
Automatically extracting key information from scientific documents has the potential to help scientists work more efficiently and accelerate the pace of scientific progress.
2 code implementations • ACL 2021 • Yixin Liu, PengFei Liu
In this paper, we present a conceptually simple yet empirically powerful framework for abstractive summarization, SimCLS, which bridges the gap between the learning objective and evaluation metrics that arises from the currently dominant sequence-to-sequence learning framework, by formulating text generation as a reference-free evaluation problem (i.e., quality estimation) assisted by contrastive learning.
Ranked #4 on Text Summarization on X-Sum
1 code implementation • ACL 2021 • Jinlan Fu, Xuanjing Huang, PengFei Liu
Recent years have seen the paradigm shift of Named Entity Recognition (NER) systems from sequence labeling to span prediction.
1 code implementation • 30 Apr 2021 • PengFei Liu, Kun Li, Helen Meng
User queries for a real-world dialog system may sometimes fall outside the scope of the system's capabilities, but appropriate system responses will enable smooth processing throughout the human-computer interaction.
1 code implementation • 25 Apr 2021 • PengFei Liu, Youzhang Ning, King Keung Wu, Kun Li, Helen Meng
This paper presents an unsupervised two-stage approach to discover intents and generate meaningful intent labels automatically from a collection of unlabeled utterances in a domain.
1 code implementation • NAACL 2021 • Yixin Liu, Zi-Yi Dou, PengFei Liu
Although some recent works show potential complementarity among different state-of-the-art systems, few works try to investigate this problem in text summarization.
1 code implementation • EMNLP 2021 • Sebastian Ruder, Noah Constant, Jan Botha, Aditya Siddhant, Orhan Firat, Jinlan Fu, PengFei Liu, Junjie Hu, Dan Garrette, Graham Neubig, Melvin Johnson
While a sizeable gap to human-level performance remains, improvements have been easier to achieve in some tasks than in others.
1 code implementation • ACL 2021 • PengFei Liu, Jinlan Fu, Yang Xiao, Weizhe Yuan, Shuaicheng Chang, Junqi Dai, Yixin Liu, Zihuiwen Ye, Zi-Yi Dou, Graham Neubig
In this paper, we present a new conceptualization and implementation of NLP evaluation: the ExplainaBoard, which in addition to inheriting the functionality of the standard leaderboard, also allows researchers to (i) diagnose strengths and weaknesses of a single system (e.g., what is the best-performing system bad at?)
1 code implementation • NAACL 2021 • Junqi Dai, Hang Yan, Tianxiang Sun, PengFei Liu, Xipeng Qiu
In this paper, we first compare the induced trees from PTMs and the dependency parsing trees on several popular models for the ABSA task, showing that the induced tree from fine-tuned RoBERTa (FT-RoBERTa) outperforms the parser-provided tree.
no code implementations • NAACL 2021 • Jinlan Fu, Liangjing Feng, Qi Zhang, Xuanjing Huang, PengFei Liu
The development of neural networks and pretraining techniques has spawned many sentence-level tagging systems that achieved superior performance on typical benchmarks.
1 code implementation • EACL 2021 • Zihuiwen Ye, PengFei Liu, Jinlan Fu, Graham Neubig
We perform an analysis of four types of NLP tasks, and both demonstrate the feasibility of fine-grained performance prediction and the necessity to perform reliability analysis for performance prediction methods in the future.
1 code implementation • 30 Jan 2021 • Weizhe Yuan, PengFei Liu, Graham Neubig
The rapid development of science and technology has been accompanied by an exponential growth in peer-reviewed scientific publications.
no code implementations • 7 Jan 2021 • Yufei Zhao, Qiushi Yao, PengFei Liu, Jingzhi Han, Zhi Wang, Qihang Liu
The study of magnetic quantum materials centers on magnetic phase transitions, among which the most common phenomenon is the transition between a low-temperature magnetically ordered phase and a high-temperature paramagnetic phase.
1 code implementation • EMNLP 2020 • Jinlan Fu, PengFei Liu, Qi Zhang, Xuanjing Huang
The performance of the Chinese Word Segmentation (CWS) systems has gradually reached a plateau with the rapid development of deep neural networks, especially the successful use of large pre-trained models.
2 code implementations • EMNLP 2020 • Jinlan Fu, PengFei Liu, Graham Neubig
With the proliferation of models for natural language processing tasks, it is even harder to understand the differences between models and their relative merits.
no code implementations • COLING 2020 • Manik Bhandari, Pranav Gour, Atabak Ashfaq, PengFei Liu
In text summarization, evaluating the efficacy of automatic metrics without human judgments has recently become popular.
1 code implementation • NAACL 2021 • Zi-Yi Dou, PengFei Liu, Hiroaki Hayashi, Zhengbao Jiang, Graham Neubig
Neural abstractive summarization models are flexible and can produce coherent summaries, but they are sometimes unfaithful and can be difficult to control.
1 code implementation • EMNLP 2020 • Manik Bhandari, Pranav Gour, Atabak Ashfaq, PengFei Liu, Graham Neubig
Automated evaluation metrics as a stand-in for manual evaluation are an essential part of the development of text-generation tasks such as text summarization.
2 code implementations • Findings of the Association for Computational Linguistics 2020 • Yiran Chen, PengFei Liu, Ming Zhong, Zi-Yi Dou, Danqing Wang, Xipeng Qiu, Xuanjing Huang
In this paper, we perform an in-depth analysis of characteristics of different datasets and investigate the performance of different summarization models under a cross-dataset setting, in which a summarizer trained on one corpus will be evaluated on a range of out-of-domain corpora.
1 code implementation • ACL 2020 • Danqing Wang, PengFei Liu, Yining Zheng, Xipeng Qiu, Xuanjing Huang
An intuitive way is to put them in the graph-based neural network, which has a more complex structure for capturing inter-sentence relationships.
1 code implementation • 20 Apr 2020 • Yong He, PengFei Liu, Xinsheng Zhang, Wang Zhou
We construct a Median-of-Means (MOM) estimator for the centered log-ratio covariance matrix and propose a thresholding procedure that is adaptive to the variability of individual entries.
2 code implementations • ACL 2020 • Ming Zhong, PengFei Liu, Yiran Chen, Danqing Wang, Xipeng Qiu, Xuanjing Huang
This paper creates a paradigm shift with regard to the way we build neural extractive summarization systems.
Ranked #1 on Text Summarization on BBC XSum
1 code implementation • 12 Jan 2020 • Jinlan Fu, PengFei Liu, Qi Zhang, Xuanjing Huang
While neural network-based models have achieved impressive performance on a large body of NLP tasks, the generalization behavior of different models remains poorly understood: Does this excellent performance imply a perfect generalization model, or are there still some limitations?
2 code implementations • ECCV 2020 • Peixuan Li, Huaici Zhao, PengFei Liu, Feidao Cao
Different from these approaches, our method predicts the nine perspective keypoints of a 3D bounding box in image space, and then utilizes the geometric relationship between the 3D and 2D perspectives to recover the dimension, location, and orientation in 3D space.
Ranked #6 on Vehicle Pose Estimation on KITTI Cars Hard
no code implementations • 7 Jan 2020 • PengFei Liu, Yimin Liu, Tianyao Huang, Yuxiang Lu, Xiqin Wang
In this paper, a decentralized spectrum allocation approach is presented to avoid mutual interference among automotive radars.
no code implementations • TACL 2020 • Ji Zhang, Chengyao Chen, PengFei Liu, Chao He, Cane Wing-Ki Leung
Second, it shows a strong advantage in determining the sentiment of a target when the context sentence contains multiple semantic segments.
no code implementations • 2 Dec 2019 • Qipeng Guo, Xipeng Qiu, PengFei Liu, xiangyang xue, Zheng Zhang
In this paper, we introduce the prior knowledge, multi-scale structure, into self-attention modules.
1 code implementation • 12 Nov 2019 • Tianxiang Sun, Yunfan Shao, Xiaonan Li, PengFei Liu, Hang Yan, Xipeng Qiu, Xuanjing Huang
Most existing deep multi-task learning models are based on parameter sharing, such as hard sharing, hierarchical sharing, and soft sharing.
no code implementations • WS 2019 • Ming Zhong, Danqing Wang, PengFei Liu, Xipeng Qiu, Xuanjing Huang
In this paper, we take stock of the current state of summarization datasets and explore how different factors of datasets influence the generalization behaviour of neural extractive summarization models.
no code implementations • 25 Sep 2019 • Jin Zhang, Weipeng Ming, PengFei Liu
In the first stage, this method locates and recognizes the math symbols of the input image with an object detection algorithm.
no code implementations • 25 Sep 2019 • Jinlan Fu, PengFei Liu, Xuanjing Huang
With the proliferation of models for natural language processing (NLP) tasks, it is even harder to understand the differences between models and their relative merits.
no code implementations • 30 Aug 2019 • Danqing Wang, PengFei Liu, Ming Zhong, Jie Fu, Xipeng Qiu, Xuanjing Huang
Although domain shift has been well explored in many NLP applications, it still has received little attention in the domain of extractive text summarization.
1 code implementation • 29 Aug 2019 • Shuaichen Chang, PengFei Liu, Yun Tang, Jing Huang, Xiaodong He, Bo-Wen Zhou
Recent years have seen great success in the use of neural seq2seq models on the text-to-SQL task.
no code implementations • 25 Jul 2019 • Lin Zehui, PengFei Liu, Luyao Huang, Junkun Chen, Xipeng Qiu, Xuanjing Huang
Various dropout methods have been designed for the fully-connected layer, convolutional layer, and recurrent layer in neural networks, and have been shown to be effective in avoiding overfitting.
2 code implementations • ACL 2019 • Ming Zhong, PengFei Liu, Danqing Wang, Xipeng Qiu, Xuanjing Huang
Recent years have seen remarkable success in the use of deep neural networks for text summarization.
Ranked #6 on Extractive Text Summarization on CNN / Daily Mail
1 code implementation • ACL 2019 • Dayiheng Liu, Jie Fu, PengFei Liu, Jiancheng Lv
Text infilling is defined as a task for filling in the missing part of a sentence or paragraph, which is suitable for many real-world natural language generation scenarios.
no code implementations • 24 Apr 2019 • PengFei Liu, Yimin Liu, Tianyao Huang, Yuxiang Lu, Xiqin Wang
The concept of cognitive radar (CR) enables radar systems to achieve intelligent adaption to a changeable environment with feedback facility from receiver to transmitter.
2 code implementations • NAACL 2019 • Qipeng Guo, Xipeng Qiu, PengFei Liu, Yunfan Shao, xiangyang xue, Zheng Zhang
Although Transformer has achieved great successes on many NLP tasks, its heavy structure with fully-connected attention connections leads to dependencies on large training data.
Ranked #13 on Sentiment Analysis on SST-5 Fine-grained classification
1 code implementation • 28 Dec 2018 • Pengfei Liu
Understanding the phenotypic drug response of cancer cell lines plays a vital role in anti-cancer drug discovery and re-purposing.
no code implementations • 26 Nov 2018 • Pengfei Liu, Jie Fu, Yue Dong, Xipeng Qiu, Jackie Chi Kit Cheung
We present two architectures for multi-task learning with neural sequence models.
no code implementations • 21 Nov 2018 • Pengfei Liu, Shuaichen Chang, Xuanjing Huang, Jian Tang, Jackie Chi Kit Cheung
Recently, a large number of neural mechanisms and models have been proposed for sequence learning, of which self-attention, as exemplified by the Transformer model, and graph neural networks (GNNs) have attracted much attention.
no code implementations • 23 Oct 2018 • Pengfei Liu, Xuanjing Huang
In this paper, we describe a general framework: Parameters Read-Write Networks (PRaWNs) to systematically analyze current neural models for multi-task learning, in which we find that existing models expect to disentangle features into different spaces while features learned in practice are still entangled in shared space, leaving potential hazards for other training or unseen tasks.
no code implementations • 8 Aug 2018 • Pengfei Liu, Ji Zhang, Cane Wing-Ki Leung, Chao He, Thomas L. Griffiths
Effective representation of a text is critical for various natural language processing tasks.
no code implementations • 25 Feb 2018 • Junkun Chen, Xipeng Qiu, Pengfei Liu, Xuanjing Huang
Specifically, we use a shared meta-network to capture the meta-knowledge of semantic composition and generate the parameters of the task-specific semantic composition models.
no code implementations • EMNLP 2017 • Pengfei Liu, Kaiyu Qian, Xipeng Qiu, Xuanjing Huang
Idioms are peculiar linguistic constructions that impose great challenges for representing the semantics of language, especially in current prevailing end-to-end neural models, which assume that the semantics of a phrase or sentence can be literally composed from its constitutive words.
no code implementations • 11 May 2017 • Pengfei Liu, Xipeng Qiu, Xuanjing Huang
Tree-structured neural networks have proven to be effective in learning semantic representations by exploiting syntactic information.
no code implementations • ACL 2017 • Pengfei Liu, Xipeng Qiu, Xuanjing Huang
Neural network models have shown their promising opportunities for multi-task learning, which focus on learning the shared layers to extract the common and task-invariant features.
no code implementations • 23 Sep 2016 • Pengfei Liu, Xipeng Qiu, Xuanjing Huang
Neural network based models have achieved impressive results on various specific tasks.
no code implementations • 22 Jul 2016 • PengFei Liu, Xipeng Qiu, Xuanjing Huang
Introducing an attentional mechanism into neural networks is a powerful concept that has achieved impressive results in many natural language processing tasks.
no code implementations • EMNLP 2016 • Pengfei Liu, Xipeng Qiu, Xuanjing Huang
Recently, there is rising interest in modelling the interactions of two sentences with deep neural networks.
Ranked #73 on Natural Language Inference on SNLI
no code implementations • 17 May 2016 • Pengfei Liu, Xipeng Qiu, Xuanjing Huang
Neural network based methods have obtained great progress on a variety of natural language processing tasks.
Ranked #10 on Emotion Recognition in Conversation on CPED