no code implementations • COLING (CogALex) 2020 • Rong Xiang, Emmanuele Chersoni, Luca Iacoponi, Enrico Santus
One dataset contained pairs for each of the training languages (systems were evaluated in a monolingual fashion), while the other proposed a surprise language to test the crosslingual transfer capabilities of the systems.
no code implementations • CMCL (ACL) 2022 • Lavinia Salicchi, Rong Xiang, Yu-Yin Hsu
Eye movement data are used in psycholinguistic studies to infer information regarding cognitive processes during reading.
no code implementations • LREC (BUCC) 2022 • Trina Kwong, Emmanuele Chersoni, Rong Xiang
In free word association tasks, human subjects are presented with a stimulus word and are then asked to name the first word (the response word) that comes to mind.
no code implementations • 29 Jan 2024 • Yi Zhao, Yilin Zhang, Rong Xiang, Jing Li, Hillming Li
Visually Impaired Assistance (VIA) aims to automatically help the visually impaired (VI) handle daily activities.
1 code implementation • 12 Jun 2023 • Yixia Li, Rong Xiang, Yanlin Song, Jing Li
Social media platforms are essential outlets for expressing opinions, providing a valuable resource for capturing public viewpoints via text analytics.
Ranked #1 on Answer Generation on WeiboPolls
no code implementations • SEMEVAL 2021 • Rong Xiang, Jinghang Gu, Emmanuele Chersoni, Wenjie Li, Qin Lu, Chu-Ren Huang
In this contribution, we describe the system presented by the PolyU CBS-Comp Team for Task 1 of SemEval 2021, where the goal was the estimation of the complexity of words in a given sentence context.
no code implementations • Joint Conference on Lexical and Computational Semantics 2020 • Emmanuele Chersoni, Rong Xiang, Qin Lu, Chu-Ren Huang
Our experiments focused on crosslingual word embeddings, in order to predict modality association scores by training on a high-resource language and testing on a low-resource one.
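A common way to realize this train-on-high-resource, test-on-low-resource setup (not necessarily the exact method used in the paper) is to align two monolingual embedding spaces with an orthogonal Procrustes mapping learned from a seed dictionary. The sketch below is an illustrative NumPy toy; the data and dimensions are made up:

```python
import numpy as np

def procrustes_align(X, Y):
    """Orthogonal map W minimizing ||X W - Y||_F over a seed dictionary."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

rng = np.random.default_rng(0)
# toy source-language embeddings and their (rotated) target-language translations
X = rng.standard_normal((50, 10))
true_W, _ = np.linalg.qr(rng.standard_normal((10, 10)))
Y = X @ true_W
W = procrustes_align(X, Y)
# a modality-score regressor fit on the aligned vectors X @ W can then be
# applied directly to embeddings of the low-resource language
```

Because the mapping is constrained to be orthogonal, it preserves distances in the source space, which is what makes transferring a regressor across languages plausible.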
no code implementations • Asian Chapter of the Association for Computational Linguistics 2020 • Rong Xiang, Mingyu Wan, Qi Su, Chu-Ren Huang, Qin Lu
Mandarin Alphabetical Words (MAWs) are an indispensable component of Modern Chinese, demonstrating unique code-mixing idiosyncrasies influenced by language exchange.
no code implementations • WS 2020 • Mingyu WAN, Kathleen Ahrens, Emmanuele Chersoni, Menghan Jiang, Qi Su, Rong Xiang, Chu-Ren Huang
This paper reports a linguistically-enriched method of detecting token-level metaphors for the second shared task on Metaphor Detection.
no code implementations • LREC 2020 • Rong Xiang, Xuefeng Gao, Yunfei Long, Anran Li, Emmanuele Chersoni, Qin Lu, Chu-Ren Huang
Automatic Chinese irony detection is a challenging task with a strong bearing on linguistic research.
no code implementations • LREC 2020 • Rong Xiang, Yunfei Long, Mingyu Wan, Jinghang Gu, Qin Lu, Chu-Ren Huang
Deep neural network models have played a critical role in sentiment analysis, achieving promising results over the past decade.
no code implementations • WS 2019 • Wenhao Ying, Rong Xiang, Qin Lu
Deep learning based general language models have achieved state-of-the-art results in many popular tasks, such as sentiment analysis and question answering.
Ranked #2 on Emotion Classification on SemEval 2018 Task 1E-c
no code implementations • WS 2019 • Mingyu Wan, Rong Xiang, Emmanuele Chersoni, Natalia Klyueva, Kathleen Ahrens, Bin Miao, David Broadstock, Jian Kang, Amos Yung, Chu-Ren Huang
no code implementations • WS 2018 • Rong Xiang, Yunfei Long, Qin Lu, Dan Xiong, I-Hsuan Chen
The representation of the major text is then learned through an LSTM model, whereas that of the minor text is learned by a separate CNN model.
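The dual-channel idea can be sketched as follows. This is an illustrative NumPy toy (a minimal LSTM cell and a max-over-time CNN whose outputs are concatenated), not the authors' implementation; all dimensions and names are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_encode(seq, W, U, b, hidden):
    """Minimal LSTM cell over a (T, d) sequence; returns the final hidden state."""
    h, c = np.zeros(hidden), np.zeros(hidden)
    for x in seq:
        z = W @ x + U @ h + b              # stacked gate pre-activations (4*hidden,)
        i, f, o, g = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)
        h = o * np.tanh(c)
    return h

def cnn_encode(seq, filters, width=3):
    """1-D convolution of filter width `width`, ReLU, max-over-time pooling."""
    T, d = seq.shape
    windows = np.stack([seq[t:t + width].ravel() for t in range(T - width + 1)])
    feats = np.maximum(windows @ filters.T, 0.0)
    return feats.max(axis=0)

d, hidden, n_filters = 8, 6, 4
major = rng.standard_normal((12, d))           # toy "major text" word vectors
minor = rng.standard_normal((5, d))            # toy "minor text" word vectors
W = rng.standard_normal((4 * hidden, d)) * 0.1
U = rng.standard_normal((4 * hidden, hidden)) * 0.1
b = np.zeros(4 * hidden)
filters = rng.standard_normal((n_filters, 3 * d)) * 0.1
combined = np.concatenate([lstm_encode(major, W, U, b, hidden),
                           cnn_encode(minor, filters)])  # fed to a classifier head
```

The point of the split is that the recurrent channel captures sequential structure in the longer text, while the convolutional channel picks out local n-gram features from the shorter one.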
no code implementations • WS 2018 • Yunfei Long, Mingyu Ma, Qin Lu, Rong Xiang, Chu-Ren Huang
In this work, we propose a dual user and product memory network (DUPMN) model to learn user profiles and product reviews using separate memory networks.
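The core operation in such memory networks is multi-hop attention over a memory of past document representations, with separate memories for the user and the product whose outputs are fused. The following is a hedged NumPy sketch of that mechanism, not the DUPMN code; all sizes are arbitrary:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def memory_hops(query, memory, n_hops=2):
    """Multi-hop attention over a (n_docs, d) memory of document vectors."""
    o = query
    for _ in range(n_hops):
        attn = softmax(memory @ o)   # relevance of each memory slot to the query
        o = o + attn @ memory        # fold the attended memory into the state
    return o

rng = np.random.default_rng(1)
d = 8
doc = rng.standard_normal(d)             # representation of the current review
user_mem = rng.standard_normal((5, d))   # the user's past review vectors
prod_mem = rng.standard_normal((7, d))   # vectors of other reviews of the product
fused = np.concatenate([memory_hops(doc, user_mem),
                        memory_hops(doc, prod_mem)])  # input to the classifier
```

Keeping the two memories separate lets the model learn a user-preference profile and a product-reputation profile independently before combining them.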
Ranked #6 on Sentiment Analysis on User and product information
no code implementations • IJCNLP 2017 • Yunfei Long, Qin Lu, Rong Xiang, Minglei Li, Chu-Ren Huang
This paper proposes a novel method to incorporate speaker profiles into an attention based LSTM model for fake news detection.
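One simple way to fold a speaker profile into attention (offered here only as an illustrative sketch, with hypothetical names, rather than the paper's exact formulation) is a bilinear score between each LSTM hidden state and the profile embedding:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def profile_attention(hidden_states, profile, W):
    """Weight each hidden state by its bilinear interaction with the profile."""
    scores = hidden_states @ W @ profile   # one relevance score per token
    weights = softmax(scores)
    return weights @ hidden_states         # profile-aware sentence vector

rng = np.random.default_rng(2)
T, h, p = 6, 8, 4
states = rng.standard_normal((T, h))   # per-token hidden states from an LSTM
speaker = rng.standard_normal(p)       # speaker profile embedding
W = rng.standard_normal((h, p)) * 0.1  # learned bilinear interaction matrix
sent = profile_attention(states, speaker, W)
```

The effect is that the same sentence can be attended to differently depending on who said it, which is the intuition behind speaker-conditioned fake news detection.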
no code implementations • EMNLP 2017 • Yunfei Long, Qin Lu, Rong Xiang, Minglei Li, Chu-Ren Huang
Evaluations show that the CBA-based method significantly outperforms state-of-the-art local context-based attention methods.