no code implementations • 25 Apr 2024 • Han Liu, Yinwei Wei, Xuemeng Song, Weili Guan, Yuan-Fang Li, Liqiang Nie
Multimodal recommendation aims to recommend user-preferred candidates based on the user's historically interacted items and the associated multimodal information.
no code implementations • 24 Apr 2024 • Cristian Rojas, Frank Algra-Maschio, Mark Andrejevic, Travis Coan, John Cook, Yuan-Fang Li
In this study, we address this gap by developing a two-step hierarchical model, the Augmented CARDS model, specifically designed for detecting contrarian climate claims on Twitter.
1 code implementation • 20 Apr 2024 • Jingqi Kang, Tongtong Wu, Jinming Zhao, Guitao Wang, Yinwei Wei, Hao Yang, Guilin Qi, Yuan-Fang Li, Gholamreza Haffari
To address the challenges of catastrophic forgetting and effective disentanglement, we propose a novel method, 'Double Mixture.'
no code implementations • 28 Mar 2024 • Rihui Jin, Yu Li, Guilin Qi, Nan Hu, Yuan-Fang Li, Jiaoyan Chen, Jianan Wang, Yongrui Chen, Dehai Min
Table understanding (TU) has achieved promising advancements, but it faces the challenges of the scarcity of manually labeled tables and the presence of complex table structures. To address these challenges, we propose HGT, a framework with a heterogeneous graph (HG)-enhanced large language model (LLM) to tackle few-shot TU tasks. It leverages the LLM by aligning the table semantics with the LLM's parametric knowledge through soft prompts and instruction tuning, and deals with complex tables via a multi-task pre-training scheme involving three novel multi-granularity self-supervised HG pre-training objectives. We empirically demonstrate the effectiveness of HGT, showing that it outperforms the SOTA for few-shot complex TU on several benchmarks.
no code implementations • 18 Feb 2024 • Farhad Moghimifar, Yuan-Fang Li, Robert Thomson, Gholamreza Haffari
Coalition negotiations are a cornerstone of parliamentary democracies, characterised by complex interactions and strategic communications among political parties.
no code implementations • 17 Feb 2024 • Minh-Vuong Nguyen, Linhao Luo, Fatemeh Shiri, Dinh Phung, Yuan-Fang Li, Thuy-Trang Vu, Gholamreza Haffari
Large language models (LLMs) demonstrate strong reasoning abilities when prompted to generate chain-of-thought (CoT) explanations alongside answers.
no code implementations • 2 Feb 2024 • Tongtong Wu, Linhao Luo, Yuan-Fang Li, Shirui Pan, Thuy-Trang Vu, Gholamreza Haffari
Large language models (LLMs) are not amenable to frequent re-training, due to high training costs arising from their massive scale.
1 code implementation • 27 Jan 2024 • Jingqi Kang, Tongtong Wu, Jinming Zhao, Guitao Wang, Guilin Qi, Yuan-Fang Li, Gholamreza Haffari
While text-based event extraction has been an active research area and has seen successful application in many domains, extracting semantic events from speech directly is an under-explored problem.
no code implementations • 26 Jan 2024 • Tao He, Tongtong Wu, Dongyang Zhang, Guiduo Duan, Ke Qin, Yuan-Fang Li
In addition, extensive experiments on two mainstream benchmark datasets, VG and Open Images (V6), show the superiority of our proposed model over a number of competitive SGG models in both continual-learning and conventional settings.
2 code implementations • 5 Jan 2024 • Xin Ma, Yaohui Wang, Gengyun Jia, Xinyuan Chen, Ziwei Liu, Yuan-Fang Li, Cunjian Chen, Yu Qiao
We propose a novel Latent Diffusion Transformer, namely Latte, for video generation.
1 code implementation • 30 Dec 2023 • Jingjing Xu, Caesar Wu, Yuan-Fang Li, Pascal Bouvry
From the model perspective, one of the PCA-enhanced models, PCA+Crossformer, reduces mean squared error (MSE) by 33.3% and decreases runtime by 49.2% on average.
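The PCA preprocessing step behind results like these can be sketched with a plain SVD-based projection. This is a minimal illustration of reducing a multivariate series before it reaches a forecaster; the component count, toy data, and absence of the Crossformer stage are all assumptions, not the paper's configuration:

```python
import numpy as np

def pca_reduce(series: np.ndarray, n_components: int) -> np.ndarray:
    """Project a multivariate time series (T timesteps x D variables)
    onto its top principal components before forecasting."""
    centered = series - series.mean(axis=0)           # center each variable
    # SVD of the centered data; rows of vt are the principal directions
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T             # shape (T, n_components)

# Toy example: 100 timesteps of 8 correlated variables, reduced to 3
rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 3))
series = latent @ rng.normal(size=(3, 8))             # rank-3 data by construction
reduced = pca_reduce(series, n_components=3)
print(reduced.shape)  # (100, 3)
```

Because the toy data has rank 3, the three retained components preserve essentially all of the variance; the reduced series would then be fed to the downstream forecasting model.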
no code implementations • 1 Dec 2023 • Shilin Qu, Weiqing Wang, Yuan-Fang Li, Xin Zhou, Fajie Yuan
HGraphormer injects the hypergraph structure information (local information) into Transformers (global information) by combining the attention matrix and hypergraph Laplacian.
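The idea of combining an attention matrix with a hypergraph Laplacian can be sketched numerically. This is an illustrative blend using the standard normalized hypergraph Laplacian with unit edge weights; the blending weight and the exact injection mechanism are assumptions, not HGraphormer's implementation:

```python
import numpy as np

def hypergraph_laplacian(H: np.ndarray) -> np.ndarray:
    """Normalized hypergraph Laplacian from a node-by-hyperedge incidence
    matrix H (Zhou et al. formulation, unit edge weights)."""
    dv = H.sum(axis=1)                                 # node degrees
    de = H.sum(axis=0)                                 # hyperedge degrees
    dv_inv_sqrt = np.diag(1.0 / np.sqrt(dv))
    de_inv = np.diag(1.0 / de)
    theta = dv_inv_sqrt @ H @ de_inv @ H.T @ dv_inv_sqrt
    return np.eye(H.shape[0]) - theta

def blended_attention(scores: np.ndarray, H: np.ndarray, alpha: float = 0.5):
    """Blend global self-attention with local hypergraph structure:
    alpha * softmax(scores) + (1 - alpha) * (I - Laplacian)."""
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)           # row-wise softmax
    structure = np.eye(H.shape[0]) - hypergraph_laplacian(H)
    return alpha * attn + (1 - alpha) * structure

# 4 nodes, 2 hyperedges: {0, 1, 2} and {2, 3}
H = np.array([[1, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
scores = np.zeros((4, 4))                              # uniform attention scores
A = blended_attention(scores, H, alpha=0.5)
```

With `alpha=1.0` the result degenerates to plain softmax attention (global information only); lowering `alpha` mixes in the structural term derived from the hypergraph (local information).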
no code implementations • 21 Nov 2023 • Caesar Wu, Yuan-Fang Li, Jian Li, Jingjing Xu, Pascal Bouvry
We aim to use this framework to conduct TAI experiments using quantitative and qualitative research methods to satisfy TAI properties in the decision-making context.
no code implementations • 3 Nov 2023 • Tao He, Lianli Gao, Jingkuan Song, Yuan-Fang Li
In light of this, we introduce SG2HOI+, a unified one-step model based on the Transformer architecture.
no code implementations • 24 Oct 2023 • Xiao-Yu Guo, Yuan-Fang Li, Gholamreza Haffari
One representative benchmark for its study is Social Intelligence Queries (Social-IQ), a dataset of multiple-choice questions on videos of complex social interactions.
2 code implementations • 3 Oct 2023 • Ming Jin, Shiyu Wang, Lintao Ma, Zhixuan Chu, James Y. Zhang, Xiaoming Shi, Pin-Yu Chen, Yuxuan Liang, Yuan-Fang Li, Shirui Pan, Qingsong Wen
We begin by reprogramming the input time series with text prototypes before feeding it into the frozen LLM to align the two modalities.
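The reprogramming step described above can be sketched as cross-attention from time-series patch embeddings (queries) to a small set of text prototypes (keys and values). This single-head sketch with no learned projections is an assumption for illustration; the actual method uses learned multi-head projections:

```python
import numpy as np

def reprogram(patches: np.ndarray, prototypes: np.ndarray) -> np.ndarray:
    """Map time-series patch embeddings into the text-embedding space by
    cross-attending to text prototypes, so a frozen LLM can consume them."""
    d = patches.shape[-1]
    scores = patches @ prototypes.T / np.sqrt(d)       # (num_patches, num_protos)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over prototypes
    return weights @ prototypes                        # convex combos of prototypes

rng = np.random.default_rng(0)
patches = rng.normal(size=(16, 32))     # 16 time-series patches, dim 32 (toy)
prototypes = rng.normal(size=(8, 32))   # 8 text prototypes (assumed learned)
aligned = reprogram(patches, prototypes)
print(aligned.shape)  # (16, 32)
```

Each output row is a convex combination of prototype embeddings, so the reprogrammed patches live inside the span of the text prototypes, which is the alignment of the two modalities that the frozen LLM then operates on.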
1 code implementation • 2 Oct 2023 • Linhao Luo, Yuan-Fang Li, Gholamreza Haffari, Shirui Pan
In this paper, we propose a novel method called reasoning on graphs (RoG) that synergizes LLMs with KGs to enable faithful and interpretable reasoning.
1 code implementation • 4 Sep 2023 • Linhao Luo, Jiaxin Ju, Bo Xiong, Yuan-Fang Li, Gholamreza Haffari, Shirui Pan
Logical rules are essential for uncovering the logical connections between relations, which could improve reasoning performance and provide interpretable results on knowledge graphs (KGs).
no code implementations • 12 Aug 2023 • Tahsina Hashem, Weiqing Wang, Derry Tanti Wijaya, Mohammed Eunus Ali, Yuan-Fang Li
Knowledge Graph (KG)-to-Text generation aims at generating fluent natural-language text that accurately represents the information of a given knowledge graph.
no code implementations • 10 Aug 2023 • Lianli Gao, Xinyu Lyu, Yuyu Guo, Yuxuan Hu, Yuan-Fang Li, Lu Xu, Heng Tao Shen, Jingkuan Song
It integrates two components: Semantic Debiasing (SD) and Balanced Predicate Learning (BPL), for these imbalances.
no code implementations • 26 May 2023 • Farhad Moghimifar, Shilin Qu, Tongtong Wu, Yuan-Fang Li, Gholamreza Haffari
Norms, which are culturally accepted guidelines for behaviours, can be integrated into conversational models to generate utterances that are appropriate for the socio-cultural context.
no code implementations • 11 May 2023 • Ming Jin, Guangsi Shi, Yuan-Fang Li, Qingsong Wen, Bo Xiong, Tian Zhou, Shirui Pan
In this paper, we establish a theoretical framework that unravels the expressive power of spectral-temporal GNNs.
no code implementations • 4 May 2023 • Fatemeh Shiri, Teresa Wang, Shirui Pan, Xiaojun Chang, Yuan-Fang Li, Reza Haffari, Van Nguyen, Shuang Yu
In order to exploit the potentially useful and rich information from such sources, it is necessary to extract not only the relevant entities and concepts but also their semantic relations, together with the uncertainty associated with the extracted knowledge (i.e., in the form of probabilistic knowledge graphs).
no code implementations • 4 May 2023 • Farhad Moghimifar, Fatemeh Shiri, Van Nguyen, Reza Haffari, Yuan-Fang Li
In this paper, we present a novel domain-adaptive visually-fused event detection approach that can be trained on a few labelled image-text paired data points.
1 code implementation • 17 Apr 2023 • Linhao Luo, Yuan-Fang Li, Gholamreza Haffari, Shirui Pan
In this paper, we propose a normalizing flow-based neural process for few-shot knowledge graph completion (NP-FKGC).
no code implementations • 30 Jan 2023 • Terry Yue Zhuo, Zhuang Li, Yujin Huang, Fatemeh Shiri, Weiqing Wang, Gholamreza Haffari, Yuan-Fang Li
Semantic parsing is a technique aimed at constructing a structured representation of the meaning of a natural-language question.
1 code implementation • 9 Nov 2022 • Wannita Takerngsaksiri, Chakkrit Tantithamthavorn, Yuan-Fang Li
However, existing syntax-aware code completion approaches are not on-the-fly: we found that for two-thirds of the characters developers type, an AST cannot be extracted, because AST construction requires syntactically correct source code. This limits their practicality in real-world scenarios.
no code implementations • 7 Nov 2022 • Xiao-Yu Guo, Yuan-Fang Li, Gholamreza Haffari
Multi-hop reading comprehension requires not only the ability to reason over raw text but also the ability to combine multiple pieces of evidence.
1 code implementation • 17 Oct 2022 • Tongtong Wu, Guitao Wang, Jinming Zhao, Zhaoran Liu, Guilin Qi, Yuan-Fang Li, Gholamreza Haffari
We explore speech relation extraction via two approaches: a pipeline approach that conducts text-based extraction with a pretrained ASR module, and an end-to-end approach via a newly proposed encoder-decoder model, which we call SpeechRE.
Automatic Speech Recognition (ASR) +4
no code implementations • COLING 2022 • Jiayi Chen, Xiao-Yu Guo, Yuan-Fang Li, Gholamreza Haffari
Answering complex questions that require multi-step multi-type reasoning over raw text is challenging, especially when conducting numerical reasoning.
no code implementations • 17 Aug 2022 • Tao He, Lianli Gao, Jingkuan Song, Yuan-Fang Li
In this paper, we introduce open-vocabulary scene graph generation, a novel, realistic and challenging setting in which a model is trained on a set of base object classes but is required to infer relations for unseen target object classes.
no code implementations • 2 Jun 2022 • Lianli Gao, Pengpeng Zeng, Jingkuan Song, Yuan-Fang Li, Wu Liu, Tao Mei, Heng Tao Shen
To date, visual question answering (VQA) (i.e., image QA and video QA) is still a holy grail in vision and language understanding, especially for video QA.
no code implementations • 21 Mar 2022 • Fatemeh Shiri, Terry Yue Zhuo, Zhuang Li, Van Nguyen, Shirui Pan, Weiqing Wang, Reza Haffari, Yuan-Fang Li
In this paper, we investigate how to exploit paraphrasing methods for the automated generation of large-scale training datasets (in the form of paraphrased utterances and their corresponding logical forms in SQL format) and present our experimental results using real-world data in the maritime domain.
no code implementations • 12 Mar 2022 • Kang Xu, Xiaoqiu Lu, Yuan-Fang Li, Tongtong Wu, Guilin Qi, Ning Ye, Dong Wang, Zheng Zhou
NTM-DMIE is a neural network method for topic learning which maximizes the mutual information between the input documents and their latent topic representation.
1 code implementation • 17 Feb 2022 • Ming Jin, Yu Zheng, Yuan-Fang Li, Siheng Chen, Bin Yang, Shirui Pan
Multivariate time series forecasting has long received significant attention in real-world applications, such as energy consumption and traffic prediction.
1 code implementation • 16 Dec 2021 • Abhik Bhattacharjee, Tahmid Hasan, Wasi Uddin Ahmad, Yuan-Fang Li, Yong-Bin Kang, Rifat Shahriyar
LaSE is strongly correlated with ROUGE and, unlike ROUGE, can be reliably measured even in the absence of references in the target language.
Abstractive Text Summarization • Cross-Lingual Abstractive Summarization +1
no code implementations • 20 Nov 2021 • Yizhen Zheng, Ming Jin, Shirui Pan, Yuan-Fang Li, Hao Peng, Ming Li, Zhao Li
To overcome the aforementioned problems, we introduce a novel self-supervised graph representation learning algorithm via Graph Contrastive Adjusted Zooming, namely G-Zoom, to learn node representations by leveraging the proposed adjusted zooming scheme.
no code implementations • 9 Nov 2021 • Yong-Bin Kang, Abdur Rahim Mohammad Forkan, Prem Prakash Jayaraman, Natalie Wieland, Elizabeth Kollias, Hung Du, Steven Thomson, Yuan-Fang Li
There has been a recent and rapid shift to digital learning, hastened by the pandemic but also driven by the now-ubiquitous availability of digital tools and platforms, making digital learning ever more accessible.
no code implementations • Findings (EMNLP) 2021 • Sheng Bi, Xiya Cheng, Yuan-Fang Li, Lizhen Qu, Shirong Shen, Guilin Qi, Lu Pan, Yinlin Jiang
The ability to generate natural-language questions with controlled complexity levels is highly desirable as it further expands the applicability of question generation.
no code implementations • 29 Sep 2021 • Ming Jin, Yuan-Fang Li, Yu Zheng, Bin Yang, Shirui Pan
Spatiotemporal representation learning on multivariate time series has received tremendous attention in forecasting traffic and energy data.
no code implementations • ICLR 2022 • Tongtong Wu, Massimo Caccia, Zhuang Li, Yuan-Fang Li, Guilin Qi, Gholamreza Haffari
In this paper, we thoroughly compare continual learning performance over combinations of 5 PLMs and 4 families of CL methods on 3 benchmarks in 2 typical incremental settings.
no code implementations • Findings (EMNLP) 2021 • Xiao-Yu Guo, Yuan-Fang Li, Gholamreza Haffari
Numerical reasoning skills are essential for complex question answering (CQA) over text.
no code implementations • 20 Aug 2021 • Tao He, Lianli Gao, Jingkuan Song, Yuan-Fang Li
Abundant real-world data can be naturally represented by large-scale networks, which demands efficient and effective learning algorithms.
no code implementations • 20 Aug 2021 • Tao He, Lianli Gao, Jingkuan Song, Yuan-Fang Li
Learning accurate low-dimensional embeddings for a network is a crucial task as it facilitates many downstream network analytics tasks.
1 code implementation • ICCV 2021 • Tao He, Lianli Gao, Jingkuan Song, Yuan-Fang Li
Human-Object Interaction (HOI) detection is a fundamental visual task aiming at localizing and recognizing interactions between humans and objects.
no code implementations • 19 Aug 2021 • Tao He, Lianli Gao, Jingkuan Song, Jianfei Cai, Yuan-Fang Li
Scene graphs provide valuable information to many downstream tasks.
1 code implementation • Findings (ACL) 2021 • Tahmid Hasan, Abhik Bhattacharjee, Md Saiful Islam, Kazi Samin, Yuan-Fang Li, Yong-Bin Kang, M. Sohel Rahman, Rifat Shahriyar
XL-Sum yields competitive results compared to those obtained using similar monolingual datasets: with multilingual training, we achieve ROUGE-2 scores higher than 11 on the 10 languages we benchmark on, with some exceeding 15.
no code implementations • Findings (ACL) 2021 • Shirong Shen, Tongtong Wu, Guilin Qi, Yuan-Fang Li, Gholamreza Haffari, Sheng Bi
Event detection (ED) aims at detecting event trigger words in sentences and classifying them into specific event types.
1 code implementation • 12 May 2021 • Ming Jin, Yizhen Zheng, Yuan-Fang Li, Chen Gong, Chuan Zhou, Shirui Pan
To overcome this problem, inspired by the recent success of graph contrastive learning and Siamese networks in visual representation learning, we propose a novel self-supervised approach in this paper to learn node representations by enhancing Siamese self-distillation with multi-scale contrastive learning.
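The contrastive core shared by such self-supervised node-representation methods can be sketched with a plain InfoNCE-style objective between two views. This shows only the basic objective; G-Zoom's multi-scale views and adjusted zooming scheme are not reproduced here, and the temperature and toy data are assumptions:

```python
import numpy as np

def info_nce(z1: np.ndarray, z2: np.ndarray, tau: float = 0.5) -> float:
    """InfoNCE-style contrastive loss between two views of the same nodes:
    matching rows of z1 and z2 are positives, all other pairs negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)   # cosine similarity
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau
    # log-softmax over each row; the diagonal holds the positive pairs
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                    # pull positives together

rng = np.random.default_rng(0)
z = rng.normal(size=(32, 16))                             # 32 node embeddings
aligned_loss = info_nce(z, z + 0.01 * rng.normal(size=(32, 16)))
random_loss = info_nce(z, rng.normal(size=(32, 16)))
```

When the two views agree (embeddings of the same node are nearly identical), the loss is low; for unrelated views it is high, which is the signal such methods use to learn node representations without labels.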
no code implementations • 4 Feb 2021 • Bhagya Hettige, Weiqing Wang, Yuan-Fang Li, Suong Le, Wray Buntine
Although a point process (e.g., a Hawkes process) is able to model a cascade temporal relationship, it strongly relies on a prior generative process assumption.
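The self-exciting cascade behavior that a Hawkes process encodes can be shown with its conditional intensity under an exponential kernel. The parameter values here are illustrative assumptions, not fitted values from the paper:

```python
import math

def hawkes_intensity(t, history, mu=0.1, alpha=0.5, beta=1.0):
    """Conditional intensity of a univariate Hawkes process with an
    exponential kernel: lambda(t) = mu + sum_i alpha * exp(-beta * (t - t_i)).
    Each past event t_i < t transiently raises the rate of future events,
    which is the cascade (self-excitation) assumption mentioned above."""
    return mu + sum(alpha * math.exp(-beta * (t - ti))
                    for ti in history if ti < t)

events = [1.0, 2.5, 2.7]
print(hawkes_intensity(3.0, events))   # excited by the recent events at 2.5, 2.7
print(hawkes_intensity(30.0, events))  # decayed back toward the baseline mu
```

The fixed parametric form of the kernel is exactly the "prior generative process assumption" the sentence above objects to: the model's flexibility is limited to choosing `mu`, `alpha`, and `beta`.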
2 code implementations • 6 Jan 2021 • Tongtong Wu, Xuekai Li, Yuan-Fang Li, Reza Haffari, Guilin Qi, Yujin Zhu, Guoqiang Xu
We propose a novel curriculum-meta learning method to tackle the above two challenges in continual relation extraction.
no code implementations • Asian Chapter of the Association for Computational Linguistics 2020 • Vishwajeet Kumar, Manish Joshi, Ganesh Ramakrishnan, Yuan-Fang Li
Question generation (QG) has recently attracted considerable attention.
1 code implementation • 29 Oct 2020 • Yuncheng Hua, Yuan-Fang Li, Guilin Qi, Wei Wu, Jingyao Zhang, Daiqing Qi
Our framework consists of a neural generator and a symbolic executor that, respectively, transform a natural-language question into a sequence of primitive actions and execute them over the knowledge base to compute the answer.
1 code implementation • 29 Oct 2020 • Yuncheng Hua, Yuan-Fang Li, Gholamreza Haffari, Guilin Qi, Wei Wu
However, this comes at the cost of manually labeling similar questions to learn a retrieval model, which is tedious and expensive.
1 code implementation • EMNLP 2020 • Yuncheng Hua, Yuan-Fang Li, Gholamreza Haffari, Guilin Qi, Tongtong Wu
Our method achieves state-of-the-art performance on the CQA dataset (Saha et al., 2018) while using only five trial trajectories for the top-5 retrieved questions in each support set, and meta-training on tasks constructed from only 1% of the training set.
Knowledge Base Question Answering • Meta Reinforcement Learning +3
no code implementations • COLING 2020 • Xiao-Yu Guo, Yuan-Fang Li, Gholamreza Haffari
A prominent approach to this task is based on the programmer-interpreter framework, where the programmer maps the question into a sequence of reasoning actions which is then executed on the raw text by the interpreter.
no code implementations • COLING 2020 • Sheng Bi, Xiya Cheng, Yuan-Fang Li, Yongzhen Wang, Guilin Qi
Question generation over knowledge bases (KBQG) aims at generating natural-language questions about a subgraph, i.e., a set of (connected) triples.
1 code implementation • 1 Sep 2020 • Sarkar Snigdha Sarathi Das, Mohammed Eunus Ali, Yuan-Fang Li, Yong-Bin Kang, Timos Sellis
Extensive experiments with a large number of regression techniques show that the embeddings produced by our proposed GSNE technique consistently and significantly improve the performance of the house price prediction task regardless of the downstream regression model.
no code implementations • 13 Jun 2020 • Tao He, Lianli Gao, Jingkuan Song, Jianfei Cai, Yuan-Fang Li
Despite the huge progress in scene graph generation in recent years, the long-tail distribution of object relationships remains a challenging and persistent issue.
1 code implementation • 20 May 2020 • Zhipeng Gao, Xin Xia, John Grundy, David Lo, Yuan-Fang Li
Stack Overflow has been heavily used by software developers as a popular way to seek programming-related information from peers via the internet.
Software Engineering
1 code implementation • 8 Dec 2019 • Bhagya Hettige, Yuan-Fang Li, Weiqing Wang, Suong Le, Wray Buntine
To address these limitations, we present $\mathtt{MedGraph}$, a supervised EMR embedding method that captures two types of information: (1) the visit-code associations in an attributed bipartite graph, and (2) the temporal sequencing of visits through a point process.
1 code implementation • 2 Dec 2019 • Bhagya Hettige, Yuan-Fang Li, Weiqing Wang, Wray Buntine
Graph embedding methods transform high-dimensional and complex graph contents into low-dimensional representations.
Ranked #1 on Link Prediction on Pubmed (nonstandard variant)
no code implementations • 8 Nov 2019 • Vishwajeet Kumar, Raktim Chaki, Sai Teja Talluri, Ganesh Ramakrishnan, Yuan-Fang Li, Gholamreza Haffari
Specifically, we propose (a) a novel hierarchical BiLSTM model with selective attention and (b) a novel hierarchical Transformer architecture, both of which learn hierarchical representations of paragraphs.
no code implementations • CONLL 2019 • Vishwajeet Kumar, Ganesh Ramakrishnan, Yuan-Fang Li
The generator is a sequence-to-sequence model that incorporates the structure and semantics of the question being generated.
no code implementations • IJCNLP 2019 • Vishwajeet Kumar, Sivaanandh Muneeswaran, Ganesh Ramakrishnan, Yuan-Fang Li
Generating syntactically and semantically valid and relevant questions from paragraphs is useful with many applications.
no code implementations • 2 Aug 2019 • Ying Yang, Michael Wybrow, Yuan-Fang Li, Tobias Czauderna, Yongqun He
Ontologies are formal representations of concepts and complex relationships among them.
1 code implementation • 1 Jul 2019 • Tao He, Yuan-Fang Li, Lianli Gao, Dongxiang Zhang, Jingkuan Song
We evaluate our framework on four public benchmark datasets, all of which show that our method is superior to the other state-of-the-art methods on the tasks of object recognition and image retrieval.
1 code implementation • 2 Jan 2019 • Wei Chen, Jincai Chen, Fuhao Zou, Yuan-Fang Li, Ping Lu, Qiang Wang, Wei Zhao
The inverted index structure is amenable to GPU-based implementations, and the state-of-the-art systems such as Faiss are able to exploit the massive parallelism offered by GPUs.
no code implementations • 15 Aug 2018 • Vishwajeet Kumar, Ganesh Ramakrishnan, Yuan-Fang Li
The generator is a sequence-to-sequence model that incorporates the structure and semantics of the question being generated.
no code implementations • 7 Mar 2018 • Vishwajeet Kumar, Kireeti Boorla, Yogesh Meena, Ganesh Ramakrishnan, Yuan-Fang Li
Neural network-based methods represent the state-of-the-art in question generation from text.
no code implementations • 1 Jun 2017 • Yuan-Fang Li, Ardavan Pedram
Our results suggest that smaller networks favor non-batched techniques while performance for larger networks is higher using batched operations.