no code implementations • 21 Feb 2024 • Liyan Xu, Jiangnan Li, Mo Yu, Jie Zhou
This work introduces a novel and practical paradigm for narrative comprehension, stemming from the observation that individual passages within narratives are often cohesively related rather than isolated.
no code implementations • 20 Feb 2024 • Liyan Xu, Zhenlin Su, Mo Yu, Jin Xu, Jinho D. Choi, Jie Zhou, Fei Liu
Factual inconsistency poses a significant hurdle for the commercial deployment of abstractive summarizers.
no code implementations • 11 Feb 2024 • Jiangnan Li, Qiujing Wang, Liyan Xu, Wenjie Pang, Mo Yu, Zheng Lin, Weiping Wang, Jie Zhou
Similar to the "previously-on" scenes in TV shows, recaps can help book reading by recalling the readers' memory about the important elements in previous texts to better understand the ongoing plot.
1 code implementation • 22 Dec 2023 • Zhenlin Su, Liyan Xu, Jin Xu, Jiangnan Li, Mingdu Huangfu
Identifying the speakers of quotations in narratives is an important task in literary analysis, with challenging scenarios including out-of-domain inference for unseen speakers, and non-explicit cases where there are no speaker mentions in the surrounding context.
no code implementations • 1 Dec 2023 • Yeshuo Shu, Gangcheng Zhang, Keyi Liu, Jintong Tang, Liyan Xu
Human mobility demonstrates a high degree of regularity, which facilitates the discovery of lifestyle profiles.
1 code implementation • 26 May 2023 • Liyan Xu, Chenwei Zhang, Xian Li, Jingbo Shang, Jinho D. Choi
We present a new task setting for attribute mining on e-commerce products, serving as a practical solution to extract open-world attributes without extensive human intervention.
1 code implementation • 9 Nov 2022 • Mo Yu, Qiujing Wang, Shunchi Zhang, Yisi Sang, Kangsheng Pu, Zekai Wei, Han Wang, Liyan Xu, Jing Li, Yue Yu, Jie Zhou
Our dataset consists of ~1,000 parsed movie scripts, each corresponding to a few-shot character understanding task that requires models to mimic humans' ability to quickly digest characters from a few starting scenes of a new movie.
no code implementations • *SEM (NAACL) 2022 • Liyan Xu, Jinho D. Choi
This paper suggests a direction of coreference resolution for online decoding on actively generated input such as dialogue, where the model accepts an utterance and its past context, then finds mentions in the current utterance as well as their referents, upon each dialogue turn.
no code implementations • 7 May 2022 • Dhanasekar Sundararaman, Vivek Subramanian, Guoyin Wang, Liyan Xu, Lawrence Carin
Numbers are essential components of text, like any other word tokens, from which natural language processing (NLP) models are built and deployed.
no code implementations • NAACL 2022 • Liyan Xu, Jinho D. Choi
We target document-level relation extraction in an end-to-end setting, where the model needs to jointly perform mention extraction, coreference resolution (COREF), and relation extraction (RE) at once, and is evaluated in an entity-centric way.
Ranked #3 on Joint Entity and Relation Extraction on DocRED
no code implementations • 2 Feb 2022 • Liyan Xu, Yile Gu, Jari Kolehmainen, Haidar Khan, Ankur Gandhe, Ariya Rastrow, Andreas Stolcke, Ivan Bulyko
Specifically, training a bidirectional model like BERT on a discriminative objective such as minimum WER (MWER) has not been explored.
1 code implementation • 1 Dec 2021 • Liyan Xu, Xuchao Zhang, Bo Zong, Yanchi Liu, Wei Cheng, Jingchao Ni, Haifeng Chen, Liang Zhao, Jinho D. Choi
We target the task of cross-lingual Machine Reading Comprehension (MRC) in the direct zero-shot setting by incorporating syntactic features from Universal Dependencies (UD), the key features being the syntactic relations within each sentence.
1 code implementation • 8 Sep 2021 • Han He, Liyan Xu, Jinho D. Choi
We introduce ELIT, the Emory Language and Information Toolkit, which is a comprehensive NLP framework providing transformer-based end-to-end models for core tasks with a special focus on memory efficiency while maintaining state-of-the-art accuracy and speed.
no code implementations • ACL (CODI, CRAC) 2021 • Liyan Xu, Jinho D. Choi
We present an effective system adapted from the end-to-end neural coreference resolution model, targeting the task of anaphora resolution in dialogues.
1 code implementation • EMNLP 2021 • Liyan Xu, Xuchao Zhang, Xujiang Zhao, Haifeng Chen, Feng Chen, Jinho D. Choi
Recent multilingual pre-trained language models have achieved remarkable zero-shot performance, where the model is fine-tuned on only one source language and directly evaluated on target languages.
1 code implementation • EMNLP 2020 • Liyan Xu, Jinho D. Choi
We find that given a high-performing encoder such as SpanBERT, the impact of HOI is negative to marginal, providing a new perspective of HOI to this task.
Ranked #6 on Coreference Resolution on CoNLL 2012
no code implementations • WS 2020 • Liyan Xu, Julien Hogan, Rachel E. Patzer, Jinho D. Choi
This paper presents a reinforcement learning approach to extract noise in long clinical documents for the task of readmission prediction after kidney transplant.