no code implementations • EMNLP 2021 • Yong Guan, Shaoru Guo, Ru Li, XiaoLi Li, Hongye Tan
In this paper, we propose a novel Frame Semantic-Enhanced Sentence Modeling for Extractive Summarization, which leverages Frame semantics to model sentences from both intra-sentence level and inter-sentence level, facilitating the text summarization task.
no code implementations • EMNLP 2020 • Wenyue Zhang, XiaoLi Li, Yang Li, Suge Wang, Deyu Li, Jian Liao, Jianxing Zheng
Detecting public sentiment drift is a challenging task because sentiment changes over time.
no code implementations • EMNLP 2021 • Yong Guan, Shaoru Guo, Ru Li, XiaoLi Li, Hu Zhang
Recently, graph-based methods have been adopted for Abstractive Text Summarization.
no code implementations • 9 May 2024 • Xue Geng, Zhe Wang, Chunyun Chen, Qing Xu, Kaixin Xu, Chao Jin, Manas Gupta, Xulei Yang, Zhenghua Chen, Mohamed M. Sabry Aly, Jie Lin, Min Wu, XiaoLi Li
To address these challenges, researchers have developed various model compression techniques such as model quantization and model pruning.
1 code implementation • 12 Apr 2024 • Emadeldeen Eldele, Mohamed Ragab, Zhenghua Chen, Min Wu, XiaoLi Li
Time series data, characterized by its intrinsic long and short-range dependencies, poses a unique challenge across analytical applications.
no code implementations • 25 Mar 2024 • Eric H. C. Chow, TJ Kao, XiaoLi Li
This study delves into the potential use of Large Language Models (LLMs) for generating Library of Congress Subject Headings (LCSH).
1 code implementation • 21 Mar 2024 • Qiushi Sun, Zhirui Chen, Fangzhi Xu, Kanzhi Cheng, Chang Ma, Zhangyue Yin, Jianing Wang, Chengcheng Han, Renyu Zhu, Shuai Yuan, Qipeng Guo, Xipeng Qiu, Pengcheng Yin, XiaoLi Li, Fei Yuan, Lingpeng Kong, Xiang Li, Zhiyong Wu
Building on our examination of the developmental trajectories, we further investigate the emerging synergies between code intelligence and broader machine intelligence, uncovering new cross-domain opportunities and illustrating the substantial influence of code intelligence across various domains.
no code implementations • 6 Mar 2024 • Yucheng Wang, Ruibing Jin, Min Wu, XiaoLi Li, Lihua Xie, Zhenghua Chen
To capture these dependencies, Graph Neural Networks (GNNs) have emerged as powerful tools, yet their effectiveness is restricted by the quality of graph construction from MTS data.
1 code implementation • 19 Feb 2024 • Hezhe Qiao, Qingsong Wen, XiaoLi Li, Ee-Peng Lim, Guansong Pang
This work considers a practical semi-supervised graph anomaly detection (GAD) scenario, where part of the nodes in a graph are known to be normal, contrasting to the unsupervised setting in most GAD studies with a fully unlabeled graph.
no code implementations • 18 Feb 2024 • J. Senthilnath, Bangjian Zhou, Zhen Wei Ng, Deeksha Aggarwal, Rajdeep Dutta, Ji Wei Yoon, Aye Phyu Phyu Aung, Keyu Wu, Min Wu, XiaoLi Li
During the evolution of the autoencoder architecture, a bias-variance regulatory strategy is employed to elicit the optimal response from the RL agent.
no code implementations • 14 Feb 2024 • J. Senthilnath, Adithya Bhattiprolu, Ankur Singh, Bangjian Zhou, Min Wu, Jón Atli Benediktsson, XiaoLi Li
A novel online clustering algorithm is presented in which an Evolving Restricted Boltzmann Machine (ERBM) is embedded with a Kohonen Network, called ERBM-KNet.
no code implementations • 4 Feb 2024 • Quang Pham, Giang Do, Huy Nguyen, TrungTin Nguyen, Chenghao Liu, Mina Sartipi, Binh T. Nguyen, Savitha Ramasamy, XiaoLi Li, Steven Hoi, Nhat Ho
Sparse mixture of experts (SMoE) offers an appealing solution for scaling up model complexity beyond the means of increasing the network's depth or width.
1 code implementation • 17 Jan 2024 • Saba Aslam, Abdur Rasool, Hongyan Wu, XiaoLi Li
This model aims to mitigate the catastrophic forgetting phenomenon in a domain incremental setting.
1 code implementation • 9 Jan 2024 • Anushiya Arunan, Yan Qin, XiaoLi Li, Chau Yuen
During online monitoring, the temporal correlation dynamics of a query device are monitored for breaches of the control limit derived in offline training.
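The offline-limit / online-monitoring loop described above can be sketched as follows. This is a generic illustration only; the statistic, the quantile choice, and the function names are assumptions, not the paper's implementation.

```python
import numpy as np

def control_limit(training_stats, quantile=0.99):
    """Derive an upper control limit from offline (healthy) statistics."""
    return np.quantile(training_stats, quantile)

def monitor(query_stat, limit):
    """Flag an anomaly when the online statistic breaches the control limit."""
    return query_stat > limit

# Offline phase: fit the limit on healthy training statistics.
healthy = np.random.default_rng(0).normal(0.0, 1.0, 10_000)
limit = control_limit(healthy)

# Online phase: a query statistic far above the healthy range is flagged.
monitor(3.5, limit)
```

In practice the monitored statistic would be a learned measure of temporal correlation rather than a raw value, but the breach test is the same threshold comparison.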
1 code implementation • 12 Dec 2023 • Giang Do, Khiem Le, Quang Pham, TrungTin Nguyen, Thanh-Nam Doan, Binh T. Nguyen, Chenghao Liu, Savitha Ramasamy, XiaoLi Li, Steven Hoi
By routing input tokens to only a few split experts, Sparse Mixture-of-Experts has enabled efficient training of large language models.
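Top-k token routing, the core mechanism behind Sparse Mixture-of-Experts, can be sketched in a few lines. This is a minimal generic sketch (function names and the renormalization choice are assumptions), not the routing used in any specific paper above.

```python
import numpy as np

def top_k_route(token_logits, k=2):
    """Select the top-k experts per token and renormalize their gate weights.

    token_logits: (num_tokens, num_experts) router scores.
    Returns (indices, weights): top-k expert ids and softmax weights per token.
    """
    top_idx = np.argsort(token_logits, axis=-1)[:, ::-1][:, :k]  # best k experts
    top_scores = np.take_along_axis(token_logits, top_idx, axis=-1)
    # Softmax over just the selected experts, so the kept weights sum to 1.
    exp = np.exp(top_scores - top_scores.max(axis=-1, keepdims=True))
    weights = exp / exp.sum(axis=-1, keepdims=True)
    return top_idx, weights

logits = np.array([[2.0, 0.5, 1.0, -1.0]])  # one token, four experts
idx, w = top_k_route(logits, k=2)
# the token is dispatched to experts 0 and 2 only
```

Because each token activates only k of the experts, compute grows with k rather than with the total expert count, which is what makes SMoE training efficient.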
no code implementations • 17 Nov 2023 • Yucheng Wang, Yuecong Xu, Jianfei Yang, Min Wu, XiaoLi Li, Lihua Xie, Zhenghua Chen
In this paper, we propose SEnsor Alignment (SEA) for MTS-UDA, aiming to reduce domain discrepancy at both the local and global sensor levels.
1 code implementation • 16 Oct 2023 • Yao Xiao, Lu Xu, Jiaxi Li, Wei Lu, XiaoLi Li
While prompt tuning approaches have achieved competitive performance with high efficiency, we observe that they invariably employ the same initialization process, wherein the soft prompt is either randomly initialized or derived from an existing embedding vocabulary.
5 code implementations • 16 Oct 2023 • Ming Jin, Qingsong Wen, Yuxuan Liang, Chaoli Zhang, Siqiao Xue, Xue Wang, James Zhang, Yi Wang, Haifeng Chen, XiaoLi Li, Shirui Pan, Vincent S. Tseng, Yu Zheng, Lei Chen, Hui Xiong
In this survey, we offer a comprehensive and up-to-date review of large models tailored (or adapted) for time series and spatio-temporal data, spanning four key facets: data types, model categories, model scopes, and application areas/tasks.
1 code implementation • 11 Sep 2023 • Yucheng Wang, Yuecong Xu, Jianfei Yang, Min Wu, XiaoLi Li, Lihua Xie, Zhenghua Chen
As MTS data typically originate from multiple sensors, ensuring spatial consistency becomes essential for the overall performance of contrastive learning on MTS data.
1 code implementation • 11 Sep 2023 • Yucheng Wang, Yuecong Xu, Jianfei Yang, Min Wu, XiaoLi Li, Lihua Xie, Zhenghua Chen
For graph construction, we design a decay graph to connect sensors across all timestamps based on their temporal distances, enabling us to fully model the ST dependencies by considering the correlations between DEDT.
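A decay graph of the kind described above can be sketched as an adjacency over (sensor, timestamp) nodes whose edge weight shrinks with temporal distance. The exponential form and the `beta` parameter here are illustrative assumptions; the paper's exact decay function may differ.

```python
import numpy as np

def decay_adjacency(num_sensors, num_steps, beta=0.5):
    """Adjacency over (sensor, timestamp) nodes: edge weight decays
    exponentially with the temporal distance between two timestamps."""
    n = num_sensors * num_steps
    adj = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            t_i, t_j = i // num_sensors, j // num_sensors
            adj[i, j] = np.exp(-beta * abs(t_i - t_j))  # closer in time => stronger edge
    return adj

A = decay_adjacency(num_sensors=2, num_steps=3)
# same-timestamp pairs get weight 1.0; weight shrinks as |t_i - t_j| grows
```

Connecting sensors across all timestamps this way lets one graph encode both spatial (same-timestamp) and temporal (cross-timestamp) dependencies.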
2 code implementations • ICCV 2023 • Kaixin Xu, Zhe Wang, Xue Geng, Jie Lin, Min Wu, XiaoLi Li, Weisi Lin
On ImageNet, we achieve up to 4.7% and 4.6% higher top-1 accuracy compared to other methods for VGG-16 and ResNet-50, respectively.
no code implementations • 18 Aug 2023 • Ruibing Jin, Guosheng Lin, Min Wu, Jie Lin, Zhengguo Li, XiaoLi Li, Zhenghua Chen
To address this issue, we propose an unlimited knowledge distillation (UKD) in this paper.
1 code implementation • 14 Jul 2023 • Mohamed Ragab, Emadeldeen Eldele, Min Wu, Chuan-Sheng Foo, XiaoLi Li, Zhenghua Chen
Existing SFDA methods, mainly designed for visual applications, may fail to handle the temporal dynamics in time series, leading to impaired adaptation performance.
1 code implementation • 7 Jul 2023 • Qing Xu, Min Wu, XiaoLi Li, Kezhi Mao, Zhenghua Chen
More specifically, a feature-domain discriminator is employed to align teacher's and student's representations for universal knowledge transfer.
1 code implementation • 30 May 2023 • Jing Wang, Aixin Sun, Hao Zhang, XiaoLi Li
Given a query, the task of Natural Language Video Localization (NLVL) is to localize a temporal moment in an untrimmed video that semantically matches the query.
no code implementations • 15 May 2023 • Ziyuan Zhao, Peisheng Qian, Xulei Yang, Zeng Zeng, Cuntai Guan, Wai Leong Tam, XiaoLi Li
Protein-protein interactions (PPIs) are crucial in various biological processes and their study has significant implications for drug development and disease diagnosis.
no code implementations • 13 May 2023 • Anushiya Arunan, Yan Qin, XiaoLi Li, Chau Yuen
The algorithm searches across the heterogeneous locally trained models and matches neurons with probabilistically similar feature extraction functions first, before selectively averaging them to form the federated model parameters.
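The match-then-average step can be sketched with a toy similarity-based matcher. This is a simplified illustration: it matches neurons greedily by cosine similarity, whereas the paper's probabilistic matching of feature extraction functions is more involved.

```python
import numpy as np

def match_and_average(w_a, w_b):
    """Match neurons of two locally trained layers by cosine similarity,
    then average each matched pair (a toy sketch of matched federated
    averaging, not the paper's probabilistic criterion)."""
    norm_a = w_a / np.linalg.norm(w_a, axis=1, keepdims=True)
    norm_b = w_b / np.linalg.norm(w_b, axis=1, keepdims=True)
    sim = norm_a @ norm_b.T                    # pairwise cosine similarity
    perm = sim.argmax(axis=1)                  # best match in model B per neuron of A
    return (w_a + w_b[perm]) / 2.0             # average matched neurons only

w_a = np.array([[1.0, 0.0], [0.0, 1.0]])
w_b = np.array([[0.0, 2.0], [2.0, 0.0]])       # same features, permuted order
merged = match_and_average(w_a, w_b)
# neuron 0 of A is averaged with neuron 1 of B, and vice versa
```

Naive coordinate-wise averaging would cancel these permuted features out; matching first is what preserves them in the federated model.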
1 code implementation • 7 May 2023 • Kuicai Dong, Aixin Sun, Jung-jae Kim, XiaoLi Li
We formally define the research problem of tuple-level speculation detection and conduct a detailed data analysis on the LSOIE dataset which contains labels for speculative tuples.
1 code implementation • 5 May 2023 • Kuicai Dong, Aixin Sun, Jung-jae Kim, XiaoLi Li
Accordingly, we propose a simple BERT-based model for sentence chunking, and propose Chunk-OIE for tuple extraction on top of SaC.
no code implementations • ICCV 2023 • Yuecong Xu, Jianfei Yang, Yunjiao Zhou, Zhenghua Chen, Min Wu, XiaoLi Li
We thus consider a more realistic Few-Shot Video-based Domain Adaptation (FSVDA) scenario where we adapt video models with only a few target video samples.
no code implementations • 13 Feb 2023 • Emadeldeen Eldele, Mohamed Ragab, Zhenghua Chen, Min Wu, Chee-Keong Kwoh, XiaoLi Li
The scarcity of labeled data is one of the main challenges of applying deep learning models on time series data in the real world.
1 code implementation • 5 Dec 2022 • Kuicai Dong, Aixin Sun, Jung-jae Kim, XiaoLi Li
In this paper, we model both constituency and dependency trees into word-level graphs, and enable neural OpenIE to learn from the syntactic structures.
Ranked #1 on Open Information Extraction on LSOIE-wiki
1 code implementation • 3 Dec 2022 • Emadeldeen Eldele, Mohamed Ragab, Zhenghua Chen, Min Wu, Chee-Keong Kwoh, XiaoLi Li
Specifically, we propose a novel temporal mixup strategy to generate two intermediate augmented views for the source and target domains.
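Mixup between domains, as in the intermediate augmented views above, reduces to a convex blend of a source and a target series. A minimal sketch, assuming a fixed mixing coefficient `lam`; the paper's strategy for choosing and scheduling the coefficient may differ.

```python
import numpy as np

def temporal_mixup(x_src, x_tgt, lam=0.7):
    """Blend a source and a target time series into an intermediate view."""
    assert x_src.shape == x_tgt.shape
    return lam * x_src + (1.0 - lam) * x_tgt

src = np.ones((128, 3))      # e.g. 128 timesteps, 3 channels
tgt = np.zeros((128, 3))
src_dominant = temporal_mixup(src, tgt, lam=0.7)  # view closer to the source domain
tgt_dominant = temporal_mixup(src, tgt, lam=0.3)  # view closer to the target domain
```

Training on such intermediate views gives the model a smooth bridge between the two domains instead of a hard source/target split.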
no code implementations • 17 Nov 2022 • Yuecong Xu, Haozhi Cao, Zhenghua Chen, XiaoLi Li, Lihua Xie, Jianfei Yang
To uniformly tackle performance degradation and the high cost of video annotation, video unsupervised domain adaptation (VUDA) adapts video models from a labeled source domain to an unlabeled target domain by alleviating video domain shift, improving the generalizability and portability of video models.
1 code implementation • 10 Oct 2022 • Emadeldeen Eldele, Mohamed Ragab, Zhenghua Chen, Min Wu, Chee-Keong Kwoh, XiaoLi Li
The past few years have witnessed a remarkable advance in deep learning for EEG-based sleep stage classification (SSC).
2 code implementations • 13 Aug 2022 • Emadeldeen Eldele, Mohamed Ragab, Zhenghua Chen, Min Wu, Chee-Keong Kwoh, XiaoLi Li, Cuntai Guan
Specifically, we propose time-series specific weak and strong augmentations and use their views to learn robust temporal relations in the proposed temporal contrasting module, besides learning discriminative representations by our proposed contextual contrasting module.
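Typical time-series-specific weak and strong augmentations can be sketched as jitter-and-scale versus permute-and-jitter. These particular transforms and parameters are common choices in the time-series contrastive-learning literature and are shown here as assumptions, not as the paper's exact augmentations.

```python
import numpy as np

rng = np.random.default_rng(0)

def weak_augment(x, scale_sigma=0.1, jitter_sigma=0.05):
    """Weak view: small per-channel scaling plus additive noise."""
    scale = 1.0 + rng.normal(0.0, scale_sigma, size=(1, x.shape[1]))
    return x * scale + rng.normal(0.0, jitter_sigma, size=x.shape)

def strong_augment(x, num_segments=4, jitter_sigma=0.05):
    """Strong view: split the series into segments, permute them, add noise."""
    segments = np.array_split(x, num_segments, axis=0)
    rng.shuffle(segments)
    return np.concatenate(segments, axis=0) + rng.normal(0.0, jitter_sigma, size=x.shape)

x = np.sin(np.linspace(0.0, 6.28, 120)).reshape(-1, 1)  # toy univariate series
weak_view, strong_view = weak_augment(x), strong_augment(x)
```

A contrastive objective then pulls the two views of the same series together, forcing the encoder to learn temporal structure that survives both mild and aggressive distortion.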
no code implementations • 10 Aug 2022 • Yuecong Xu, Jianfei Yang, Haozhi Cao, Min Wu, XiaoLi Li, Lihua Xie, Zhenghua Chen
To enable video models to be applied seamlessly across video tasks in different environments, various Video Unsupervised Domain Adaptation (VUDA) methods have been proposed to improve the robustness and transferability of video models.
no code implementations • 8 May 2022 • Zhenghua Chen, Min Wu, Alvin Chan, XiaoLi Li, Yew-Soon Ong
We believe that this technical review can help to promote a sustainable development of AI R&D activities for the research community.
1 code implementation • 2 May 2022 • Zhiwei Hu, Víctor Gutiérrez-Basulto, Zhiliang Xiang, XiaoLi Li, Ru Li, Jeff Z. Pan
Multi-hop reasoning over real-life knowledge graphs (KGs) is a highly challenging problem, as traditional subgraph matching methods cannot deal with noise and missing information.
no code implementations • 30 Mar 2022 • Yan Qin, Chau Yuen, Yimin Shao, Bo Qin, XiaoLi Li
Similarly, the estimation accuracy of the milling machine has been improved by 23.57% compared to LSTM and 19.54% compared to CapsNet.
1 code implementation • 15 Mar 2022 • Mohamed Ragab, Emadeldeen Eldele, Wee Ling Tan, Chuan-Sheng Foo, Zhenghua Chen, Min Wu, Chee-Keong Kwoh, XiaoLi Li
Our evaluation includes adapting state-of-the-art visual domain adaptation methods to time series data as well as the recent methods specifically developed for time series data.
no code implementations • 19 Feb 2022 • Yuecong Xu, Jianfei Yang, Haozhi Cao, Jianxiong Yin, Zhenghua Chen, XiaoLi Li, Zhengguo Li, Qianwen Xu
While action recognition (AR) has gained large improvements with the introduction of large-scale video datasets and the development of deep neural networks, AR models robust to challenging environments in real-world scenarios are still under-explored.
1 code implementation • 29 Nov 2021 • Mohamed Ragab, Emadeldeen Eldele, Zhenghua Chen, Min Wu, Chee-Keong Kwoh, XiaoLi Li
Second, we propose a novel autoregressive domain adaptation technique that incorporates temporal dependency of both source and target features during domain alignment.
no code implementations • 29 Sep 2021 • Mohamed Ragab, Emadeldeen Eldele, Wee Ling Tan, Chuan-Sheng Foo, Zhenghua Chen, Min Wu, Chee Kwoh, XiaoLi Li
Our evaluation includes adaptations of state-of-the-art visual domain adaptation methods to time series data in addition to recent methods specifically developed for time series data.
no code implementations • 15 Aug 2021 • Peisheng Qian, Ziyuan Zhao, Cong Chen, Zeng Zeng, XiaoLi Li
Diabetic retinopathy (DR) is one of the most common eye conditions among diabetic patients.
no code implementations • ACL 2021 • Xuefeng Su, Ru Li, XiaoLi Li, Jeff Z. Pan, Hu Zhang, Qinghua Chai, Xiaoqi Han
In this paper, we propose a Knowledge-Guided Frame Identification framework (KGFI) that integrates three types of frame knowledge, including frame definitions, frame elements and frame-to-frame relations, to learn better frame representations. This knowledge guides KGFI to jointly map target words and frames into the same embedding space and then identify the best frame by computing the dot-product similarity scores between the target word embedding and all of the frame embeddings.
2 code implementations • 1 Aug 2021 • Chunjiang Che, XiaoLi Li, Chuan Chen, Xiaoyu He, Zibin Zheng
In addition, we theoretically analyze and prove the convergence of CMFL under different election and selection strategies, which coincides with the experimental results.
1 code implementation • 9 Jul 2021 • Emadeldeen Eldele, Mohamed Ragab, Zhenghua Chen, Min Wu, Chee-Keong Kwoh, XiaoLi Li, Cuntai Guan
Second, we design an iterative self-training strategy to improve the classification performance on the target domain via target domain pseudo labels.
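The confidence-filtering step at the heart of such pseudo-label self-training can be sketched as below. The threshold value and function names are illustrative assumptions; the paper's iterative schedule is not reproduced here.

```python
import numpy as np

def select_pseudo_labels(probs, threshold=0.9):
    """Keep only confident target-domain predictions as pseudo labels.

    probs: (num_samples, num_classes) predicted class probabilities.
    Returns the retained labels and the boolean mask of retained rows.
    """
    confidence = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    mask = confidence >= threshold
    return labels[mask], mask

probs = np.array([[0.95, 0.05],   # confident -> kept as pseudo label 0
                  [0.60, 0.40],   # uncertain -> dropped
                  [0.08, 0.92]])  # confident -> kept as pseudo label 1
labels, mask = select_pseudo_labels(probs)
```

The retained pseudo labels are then treated as ground truth for the next round of training, and the process repeats as the target-domain classifier improves.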
no code implementations • ACL 2021 • Thanh-Tung Nguyen, Xuan-Phi Nguyen, Shafiq Joty, XiaoLi Li
We introduce a generic seq2seq parsing framework that casts constituency parsing problems (syntactic and discourse parsing) into a series of conditional splitting decisions.
1 code implementation • 26 Jun 2021 • Emadeldeen Eldele, Mohamed Ragab, Zhenghua Chen, Min Wu, Chee Keong Kwoh, XiaoLi Li, Cuntai Guan
In this paper, we propose an unsupervised Time-Series representation learning framework via Temporal and Contextual Contrasting (TS-TCC), to learn time-series representation from unlabeled data.
Ranked #1 on Recognizing And Localizing Human Actions on HAR
no code implementations • 16 Jun 2021 • XiaoLi Li
Dependent Dirichlet processes (DDP) have been widely applied to model data from distributions over collections of measures which are correlated in some way.
no code implementations • 15 Jun 2021 • XiaoLi Li
To solve this issue, we propose Constructivism learning for instance-dependent Dropout Architecture (CODA), which is inspired by the philosophical theory of constructivism learning.
1 code implementation • NAACL 2021 • Thanh-Tung Nguyen, Xuan-Phi Nguyen, Shafiq Joty, XiaoLi Li
We introduce a novel top-down end-to-end formulation of document-level discourse parsing in the Rhetorical Structure Theory (RST) framework.
Ranked #1 on Discourse Parsing on RST-DT
1 code implementation • Findings (ACL) 2021 • Kuicai Dong, Yilin Zhao, Aixin Sun, Jung-jae Kim, XiaoLi Li
Both DocOIE dataset and DocIE model are released for public.
Ranked #1 on Open Information Extraction on DocOIE-transportation
1 code implementation • 28 Apr 2021 • Emadeldeen Eldele, Zhenghua Chen, Chengyu Liu, Min Wu, Chee-Keong Kwoh, XiaoLi Li, Cuntai Guan
The MRCNN can extract low and high frequency features and the AFR is able to improve the quality of the extracted features by modeling the inter-dependencies between the features.
Ranked #1 on Automatic Sleep Stage Classification on Sleep-EDF
no code implementations • CVPR 2022 • Aye Phyu Phyu Aung, Xinrun Wang, Runsheng Yu, Bo An, Senthilnath Jayavelu, XiaoLi Li
In this paper, we propose a new approach to train Generative Adversarial Networks (GANs) where we deploy a double-oracle framework using the generator and discriminator oracles.
no code implementations • 6 Feb 2021 • Yuxiao Lu, Jie Lin, Chao Jin, Zhe Wang, Min Wu, Khin Mi Mi Aung, XiaoLi Li
Despite the faster HECNN inference, the mainstream packing schemes Dense Packing (DensePack) and Convolution Packing (ConvPack) introduce expensive rotation overhead, which prolongs the inference latency of HECNN for deeper and wider CNN architectures.
no code implementations • COLING 2020 • Shaoru Guo, Yong Guan, Ru Li, XiaoLi Li, Hongye Tan
Machine reading comprehension (MRC) is one of the most critical yet challenging tasks in natural language understanding (NLU), where both syntactic and semantic information of text are essential components for text understanding.
no code implementations • 29 Oct 2020 • Ziyuan Zhao, Kartik Chopra, Zeng Zeng, XiaoLi Li
Diabetes is one of the most common diseases among individuals.
no code implementations • 17 Mar 2016 • Honghai Yu, Pierre Moulin, Hong Wei Ng, XiaoLi Li
In particular, we propose a block K-means hashing (B-KMH) method to obtain significantly improved retrieval performance with no increase in storage and marginal increase in computational cost.