1 code implementation • 24 May 2024 • Rui Miao, Kaixiong Zhou, Yili Wang, Ninghao Liu, Ying Wang, Xin Wang
We learn the joint distribution of node and cluster labels conditioned on their representations, and train GNNs with the obtained joint loss.
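The joint node/cluster objective above can be sketched as a joint negative log-likelihood. This toy version assumes the two labels are conditionally independent given the representation, which is a simplifying assumption for illustration, not necessarily the paper's formulation:

```python
import numpy as np

def joint_nll(node_logits, cluster_logits, y, c):
    """Negative log-likelihood of the joint (node label, cluster label)
    pair, assuming conditional independence given the representation
    (a simplifying assumption for this sketch)."""
    def log_softmax(z):
        z = z - z.max()                      # numerically stable
        return z - np.log(np.exp(z).sum())
    return -(log_softmax(node_logits)[y] + log_softmax(cluster_logits)[c])

# Hypothetical logits for 3 node classes and 2 clusters.
loss = joint_nll(np.array([2.0, 0.1, -1.0]), np.array([0.5, 1.5]), y=0, c=1)
print(loss > 0)  # a proper NLL is strictly positive here
```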
1 code implementation • ICLR 2024 • Yili Wang, Kaixiong Zhou, Ninghao Liu, Ying Wang, Xin Wang
Sharpness-aware minimization (SAM) has received increasing attention in computer vision, since it can effectively eliminate sharp local minima from the training trajectory and mitigate generalization degradation.
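The SAM idea, perturbing the weights toward the sharpest nearby point and then descending from there, can be illustrated on a toy quadratic loss. This is a minimal sketch with illustrative `lr` and `rho` values, not the paper's implementation:

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One sharpness-aware minimization (SAM) step: first ascend to an
    adversarial point within an L2 ball of radius rho, then descend
    using the gradient computed at that perturbed point."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # worst-case perturbation
    return w - lr * grad_fn(w + eps)             # descend from perturbed point

# Toy quadratic loss L(w) = 0.5 * ||w||^2, whose gradient is w itself.
grad = lambda w: w
w = np.array([1.0, -2.0])
for _ in range(100):
    w = sam_step(w, grad)
print(np.linalg.norm(w) < 0.01)  # converges toward the flat minimum at 0
```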
no code implementations • 17 Apr 2024 • Zihao Li, Yucheng Shi, Zirui Liu, Fan Yang, Ninghao Liu, Mengnan Du
However, no existing work quantitatively measures the performance of LLMs in low-resource languages.
no code implementations • 28 Mar 2024 • Yucheng Shi, Qiaoyu Tan, Xuansheng Wu, Shaochen Zhong, Kaixiong Zhou, Ninghao Liu
Large Language Models (LLMs) have shown proficiency in question-answering tasks but often struggle to integrate real-time knowledge updates, leading to potentially outdated or inaccurate responses.
1 code implementation • 13 Mar 2024 • Xuansheng Wu, Haiyan Zhao, Yaochen Zhu, Yucheng Shi, Fan Yang, Tianming Liu, Xiaoming Zhai, Wenlin Yao, Jundong Li, Mengnan Du, Ninghao Liu
Therefore, in this paper, we introduce Usable XAI in the context of LLMs by analyzing (1) how XAI can benefit LLMs and AI systems, and (2) how LLMs can contribute to the advancement of XAI.
no code implementations • 25 Jan 2024 • John A. Miller, Mohammed Aldosari, Farah Saeed, Nasid Habib Barna, Subas Rana, I. Budak Arpinar, Ninghao Liu
Furthermore, there is a vast amount of knowledge available that deep learning models can tap into, including Knowledge Graphs and Large Language Models fine-tuned with scientific domain knowledge.
no code implementations • 22 Jan 2024 • Huaqin Zhao, Zhengliang Liu, Zihao Wu, Yiwei Li, Tianze Yang, Peng Shu, Shaochen Xu, Haixing Dai, Lin Zhao, Gengchen Mai, Ninghao Liu, Tianming Liu
Additionally, we conducted holistic tests on multiple financial tasks through the combination of natural language instructions.
no code implementations • 23 Dec 2023 • Chenjiao Tan, Qian Cao, Yiwei Li, Jielu Zhang, Xiao Yang, Huaqin Zhao, Zihao Wu, Zhengliang Liu, Hao Yang, Nemin Wu, Tao Tang, Xinyue Ye, Lilong Chai, Ninghao Liu, Changying Li, Lan Mu, Tianming Liu, Gengchen Mai
The advent of large language models (LLMs) has heightened interest in their potential for multimodal applications that integrate language and vision.
1 code implementation • 23 Dec 2023 • Hengrui Gu, Kaixiong Zhou, Xiaotian Han, Ninghao Liu, Ruobing Wang, Xin Wang
Multi-hop question answering (MQA) is one of the challenging tasks for evaluating a machine's comprehension and reasoning abilities, on which large language models (LLMs) have widely achieved human-comparable performance.
no code implementations • 30 Nov 2023 • Gyeong-Geon Lee, Ehsan Latif, Xuansheng Wu, Ninghao Liu, Xiaoming Zhai
We found a more balanced accuracy across different proficiency categories when CoT was used with a scoring rubric, highlighting the importance of domain-specific reasoning in enhancing the effectiveness of LLMs in scoring tasks.
no code implementations • 29 Nov 2023 • Lijie Hu, Yixin Liu, Ninghao Liu, Mengdi Huai, Lichao Sun, Di Wang
However, ViTs suffer from issues with explanation faithfulness, as their focal points are fragile to adversarial attacks and can be easily changed with even slight perturbations on the input image.
no code implementations • 30 Oct 2023 • Zhengliang Liu, Yiwei Li, Qian Cao, Junwen Chen, Tianze Yang, Zihao Wu, John Hale, John Gibbs, Khaled Rasheed, Ninghao Liu, Gengchen Mai, Tianming Liu
Recent advances in artificial general intelligence (AGI), particularly large language models and creative image generation systems, have demonstrated impressive capabilities on diverse tasks spanning the arts and humanities.
no code implementations • 19 Oct 2023 • Hua Tang, Lu Cheng, Ninghao Liu, Mengnan Du
While the accuracy-fairness trade-off has been frequently observed in the literature of fair machine learning, rigorous theoretical analyses have been scarce.
no code implementations • 16 Oct 2023 • Chenxu Zhao, Wei Qian, Yucheng Shi, Mengdi Huai, Ninghao Liu
Deep neural networks have exhibited remarkable performance across a wide range of real-world tasks.
1 code implementation • 30 Sep 2023 • Xuansheng Wu, Wenlin Yao, Jianshu Chen, Xiaoman Pan, Xiaoyang Wang, Ninghao Liu, Dong Yu
In this work, we investigate how instruction tuning adjusts pre-trained models, with a focus on intrinsic changes.
no code implementations • 27 Sep 2023 • Yucheng Shi, Shaochen Xu, Zhengliang Liu, Tianming Liu, Xiang Li, Ninghao Liu
Focusing on medical QA using the MedQA-SMILE dataset, we evaluate the impact of different retrieval models and the number of facts provided to the LLM.
no code implementations • 18 Sep 2023 • Zhengliang Liu, Peilong Wang, Yiwei Li, Jason Holmes, Peng Shu, Lian Zhang, Chenbin Liu, Ninghao Liu, Dajiang Zhu, Xiang Li, Quanzheng Li, Samir H. Patel, Terence T. Sio, Tianming Liu, Wei Liu
This paper presents RadOnc-GPT, a large language model specialized for radiation oncology through advanced tuning methods.
no code implementations • 17 Sep 2023 • Zirui He, Huiqi Deng, Haiyan Zhao, Ninghao Liu, Mengnan Du
Recent research has shown that large language models rely on spurious correlations in the data for natural language understanding (NLU) tasks.
Natural Language Understanding • Out-of-Distribution Generalization
no code implementations • 14 Sep 2023 • Fei Dou, Jin Ye, Geng Yuan, Qin Lu, Wei Niu, Haijian Sun, Le Guan, Guoyu Lu, Gengchen Mai, Ninghao Liu, Jin Lu, Zhengliang Liu, Zihao Wu, Chenjiao Tan, Shaochen Xu, Xianqiao Wang, Guoming Li, Lilong Chai, Sheng Li, Jin Sun, Hongyue Sun, Yunli Shao, Changying Li, Tianming Liu, WenZhan Song
Artificial General Intelligence (AGI), possessing the capacity to comprehend, learn, and execute tasks with human cognitive abilities, engenders significant anticipation and intrigue across scientific, commercial, and societal arenas.
no code implementations • 2 Sep 2023 • Haiyan Zhao, Hanjie Chen, Fan Yang, Ninghao Liu, Huiqi Deng, Hengyi Cai, Shuaiqiang Wang, Dawei Yin, Mengnan Du
For each paradigm, we summarize the goals and dominant approaches for generating local explanations of individual predictions and global explanations of overall model knowledge.
1 code implementation • 18 Aug 2023 • Yucheng Shi, Yushun Dong, Qiaoyu Tan, Jundong Li, Ninghao Liu
By considering embeddings encompassing graph topology and attribute information as reconstruction targets, our model could capture more generalized and comprehensive knowledge.
1 code implementation • 8 Aug 2023 • Zihan Guan, Mengnan Du, Ninghao Liu
An emerging detection strategy in the vision and NLP domains is based on an intriguing phenomenon: when models are trained on a mixture of backdoor and clean samples, the loss on backdoor samples drops significantly faster than on clean samples, so backdoor samples can be detected simply by selecting the samples with the lowest loss values.
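The low-loss selection rule described above can be sketched with synthetic per-sample losses; all numbers here are hypothetical and chosen only to mimic the faster loss drop on backdoor samples:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-sample training losses after a few epochs: the first
# 20 samples are backdoored and are learned faster, so their loss is lower.
backdoor_losses = rng.uniform(0.0, 0.2, size=20)
clean_losses = rng.uniform(0.5, 2.0, size=180)
losses = np.concatenate([backdoor_losses, clean_losses])

# Flag the k samples with the lowest loss as suspected backdoors.
k = 20
suspects = np.argsort(losses)[:k]
print(set(map(int, suspects)) == set(range(20)))  # all true backdoors found
```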
no code implementations • 21 Jul 2023 • Zihan Guan, Zihao Wu, Zhengliang Liu, Dufan Wu, Hui Ren, Quanzheng Li, Xiang Li, Ninghao Liu
Participant recruitment based on unstructured medical texts, such as clinical notes and radiology reports, has been a challenging yet important task for cohort establishment in clinical research.
no code implementations • 10 Jul 2023 • Haixing Dai, Lu Zhang, Lin Zhao, Zihao Wu, Zhengliang Liu, David Liu, Xiaowei Yu, Yanjun Lyu, Changying Li, Ninghao Liu, Tianming Liu, Dajiang Zhu
With the popularity of deep neural networks (DNNs), model interpretability is becoming a critical concern.
1 code implementation • 3 Jul 2023 • Yucheng Shi, Kaixiong Zhou, Ninghao Liu
Then, we design two data augmentation schemes on graphs for perturbing structural and feature information, respectively.
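Edge dropping and feature masking are common instances of the two augmentation families mentioned above. This NumPy sketch assumes a dense adjacency matrix and is illustrative rather than the paper's exact scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

def drop_edges(A, p, rng):
    """Structural augmentation: randomly remove a fraction p of edges,
    keeping the adjacency matrix symmetric."""
    keep = (rng.random(A.shape) > p).astype(float)
    keep = np.triu(keep, 1)                  # sample each edge once
    return A * (keep + keep.T)

def mask_features(X, p, rng):
    """Feature augmentation: randomly zero out a fraction p of entries."""
    return X * (rng.random(X.shape) > p)

A = np.ones((5, 5)) - np.eye(5)              # toy complete graph
X = rng.random((5, 3))                       # toy node features
A_aug, X_aug = drop_edges(A, 0.3, rng), mask_features(X, 0.3, rng)
print((A_aug == A_aug.T).all(), A_aug.sum() <= A.sum())
```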
1 code implementation • 29 Jun 2023 • Xuansheng Wu, Huachi Zhou, Yucheng Shi, Wenlin Yao, Xiao Huang, Ninghao Liu
To evaluate our approach, we introduce a cold-start recommendation benchmark, and the results demonstrate that the enhanced small language models can achieve comparable cold-start recommendation performance to that of large models with only $17\%$ of the inference time.
no code implementations • 20 Jun 2023 • Saed Rezayi, Zhengliang Liu, Zihao Wu, Chandra Dhakal, Bao Ge, Haixing Dai, Gengchen Mai, Ninghao Liu, Chen Zhen, Tianming Liu, Sheng Li
ChatGPT has been shown to be a strong baseline in many NLP tasks, and we believe it has the potential to improve our model in the task of semantic matching and enhance our model's understanding of food-related concepts and relationships.
1 code implementation • 18 Jun 2023 • Shuang Zhou, Xiao Huang, Ninghao Liu, Huachi Zhou, Fu-Lai Chung, Long-Kai Huang
In this paper, building on this phenomenon, we propose a general and novel research problem, generalized graph anomaly detection, which aims to effectively identify anomalies on both the training-domain graph and unseen testing graphs to eliminate potential dangers.
no code implementations • 9 Jun 2023 • Yao Rong, Guanchu Wang, Qizhang Feng, Ninghao Liu, Zirui Liu, Enkelejda Kasneci, Xia Hu
A strategy of subgraph sampling is designed in LARA to improve the scalability of the training process.
no code implementations • 23 May 2023 • Ziqi Zhao, Yucheng Shi, Shushan Wu, Fan Yang, WenZhan Song, Ninghao Liu
Deep learning models for time-series tasks have attracted increasing research attention.
1 code implementation • ICLR 2022 • Qizhang Feng, Ninghao Liu, Fan Yang, Ruixiang Tang, Mengnan Du, Xia Hu
Graph Neural Networks (GNNs) are gaining extensive attention for their application in graph data.
no code implementations • 5 May 2023 • Zihan Guan, Mengxuan Hu, Zhongliang Zhou, Jielu Zhang, Sheng Li, Ninghao Liu
Recently, the Segment Anything Model (SAM) has gained significant attention as an image segmentation foundation model due to its strong performance on various downstream tasks.
no code implementations • 24 Apr 2023 • Ehsan Latif, Gengchen Mai, Matthew Nyaaba, Xuansheng Wu, Ninghao Liu, Guoyu Lu, Sheng Li, Tianming Liu, Xiaoming Zhai
AGI, driven by the recent large pre-trained models, represents a significant leap in the capability of machines to perform tasks that require human-level intelligence, such as reasoning, problem-solving, decision-making, and even understanding human emotions and social interactions.
no code implementations • 21 Apr 2023 • Guanchu Wang, Ninghao Liu, Daochen Zha, Xia Hu
Anomaly detection, which discovers data instances containing feature patterns different from the majority, plays a fundamental role in various applications.
no code implementations • 13 Apr 2023 • Gengchen Mai, Weiming Huang, Jin Sun, Suhang Song, Deepak Mishra, Ninghao Liu, Song Gao, Tianming Liu, Gao Cong, Yingjie Hu, Chris Cundy, Ziyuan Li, Rui Zhu, Ni Lao
In this work, we explore the promises and challenges of developing multimodal foundation models for GeoAI.
no code implementations • 12 Apr 2023 • Guoyu Lu, Sheng Li, Gengchen Mai, Jin Sun, Dajiang Zhu, Lilong Chai, Haijian Sun, Xianqiao Wang, Haixing Dai, Ninghao Liu, Rui Xu, Daniel Petti, Tianming Liu, Changying Li
Artificial General Intelligence (AGI) is poised to revolutionize a variety of sectors, including healthcare, finance, transportation, and education.
1 code implementation • 20 Mar 2023 • Ruixiang Tang, Qizhang Feng, Ninghao Liu, Fan Yang, Xia Hu
To overcome this challenge, we introduce a clean-label backdoor watermarking framework that uses imperceptible perturbations to replace mislabeled samples.
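A clean-label watermark of this kind can be sketched as an L-infinity-bounded perturbation added to correctly labeled samples; the `eps` budget and the trigger pattern below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def watermark(x, trigger, eps=8 / 255):
    """Add an imperceptible, L-infinity-bounded trigger to a correctly
    labeled sample (clean-label watermarking sketch)."""
    delta = np.clip(trigger, -eps, eps)      # keep the perturbation small
    return np.clip(x + delta, 0.0, 1.0)      # stay in the valid pixel range

x = rng.random((8, 8))                       # toy grayscale image in [0, 1]
trigger = rng.uniform(-0.1, 0.1, size=(8, 8))
xw = watermark(x, trigger)
print(np.abs(xw - x).max() <= 8 / 255 + 1e-9)  # imperceptibility holds
```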
no code implementations • 13 Mar 2023 • Xuansheng Wu, Kaixiong Zhou, Mingchen Sun, Xin Wang, Ninghao Liu
In particular, we introduce the basic concepts of graph prompt learning, organize the existing work of designing graph prompting functions, and describe their applications and future challenges.
no code implementations • 25 Feb 2023 • Haixing Dai, Zhengliang Liu, Wenxiong Liao, Xiaoke Huang, Yihan Cao, Zihao Wu, Lin Zhao, Shaochen Xu, Wei Liu, Ninghao Liu, Sheng Li, Dajiang Zhu, Hongmin Cai, Lichao Sun, Quanzheng Li, Dinggang Shen, Tianming Liu, Xiang Li
Text data augmentation is an effective strategy for overcoming the challenge of limited sample sizes in many natural language processing (NLP) tasks.
1 code implementation • 24 Feb 2023 • Xuansheng Wu, Zhiyi Zhao, Ninghao Liu
We propose a novel non-parametric/un-trainable language model, named Non-Parametric Pairwise Attention Random Walk Model (NoPPA), to generate sentence embeddings using only pre-trained word embeddings and pre-counted word frequencies.
1 code implementation • 20 Jan 2023 • Xuansheng Wu, Xinyu He, Tianming Liu, Ninghao Liu, Xiaoming Zhai
Developing models to automatically score students' written responses to science problems is critical for science education.
1 code implementation • 23 Dec 2022 • Qiaoyu Tan, Xin Zhang, Ninghao Liu, Daochen Zha, Li Li, Rui Chen, Soo-Hyun Choi, Xia Hu
To bridge the gap, we introduce a Personalized Subgraph Selector (PS2) as a plug-and-play framework to automatically, personally, and inductively identify optimal subgraphs for different edges when performing GNNLP.
1 code implementation • 25 Nov 2022 • Yushun Dong, Song Wang, Jing Ma, Ninghao Liu, Jundong Li
In this paper, we study a novel problem of interpreting GNN unfairness through attributing it to the influence of training nodes.
no code implementations • 23 Nov 2022 • Lijie Hu, Yixin Liu, Ninghao Liu, Mengdi Huai, Lichao Sun, Di Wang
Results show that SEAT is more stable against different perturbations and randomness while also keeps the explainability of attention, which indicates it is a more faithful explanation.
no code implementations • 5 Nov 2022 • Hongmin Cai, Wenxiong Liao, Zhengliang Liu, Yiyang Zhang, Xiaoke Huang, Siqi Ding, Hui Ren, Zihao Wu, Haixing Dai, Sheng Li, Lingfei Wu, Ninghao Liu, Quanzheng Li, Tianming Liu, Xiang Li
In this framework, we apply distant supervision to cross-domain knowledge graph adaptation.
1 code implementation • ACM International Conference on Information & Knowledge Management (CIKM) 2022 • Yili Wang, Kaixiong Zhou, Rui Miao, Ninghao Liu, Xin Wang
To bridge the gap between large-scale graph training and contrastive learning, we propose adaptive subgraph contrastive learning (AdaGCL).
1 code implementation • 21 Sep 2022 • Shuang Zhou, Xiao Huang, Ninghao Liu, Fu-Lai Chung, Long-Kai Huang
In this paper, building on this phenomenon, we propose a general and novel research problem, generalized graph anomaly detection, which aims to effectively identify anomalies on both the training-domain graph and unseen testing graphs to eliminate potential dangers.
1 code implementation • 5 Aug 2022 • Guanchu Wang, Zirui Liu, Zhimeng Jiang, Ninghao Liu, Na Zou, Xia Hu
Activation compressed training provides a solution for reducing the memory cost of training deep neural networks (DNNs).
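Activation compression is often realized with low-bit uniform quantization: store a low-bit code during the forward pass and dequantize for the backward pass. This 2-bit round-trip sketch is a generic illustration, not the paper's scheme:

```python
import numpy as np

def compress(a, bits=2):
    """Uniformly quantize activations into 2**bits levels (sketch)."""
    lo, hi = a.min(), a.max()
    levels = 2 ** bits - 1
    q = np.round((a - lo) / (hi - lo + 1e-12) * levels).astype(np.uint8)
    return q, lo, hi

def decompress(q, lo, hi, bits=2):
    """Map the low-bit codes back to approximate activation values."""
    levels = 2 ** bits - 1
    return q.astype(np.float64) / levels * (hi - lo) + lo

a = np.linspace(-1.0, 1.0, 9)
q, lo, hi = compress(a)
a_hat = decompress(q, lo, hi)
# Reconstruction error is bounded by half a quantization step.
print(np.abs(a - a_hat).max() <= (hi - lo) / (2 * (2 ** 2 - 1)) + 1e-9)
```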
1 code implementation • 20 Jul 2022 • Guanchu Wang, Mengnan Du, Ninghao Liu, Na Zou, Xia Hu
Existing work on fairness modeling commonly assumes that sensitive attributes for all instances are fully available, which may not be true in many real-world applications due to the high cost of acquiring sensitive information.
1 code implementation • SIAM International Conference on Data Mining 2022 • Shuang Zhou, Xiao Huang, Ninghao Liu, Qiaoyu Tan, Fu-Lai Chung
Network anomaly detection is a crucial task since a few anomalies can cause huge losses.
1 code implementation • 15 Feb 2022 • Xiaotian Han, Zhimeng Jiang, Ninghao Liu, Xia Hu
To this end, we propose $\mathcal{G}$-Mixup to augment graphs for graph classification by interpolating the generator (i.e., graphon) of different classes of graphs.
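Graphon mixup can be sketched by interpolating two class-level edge-probability matrices and sampling a graph from the mixture. Block sizes and probabilities below are hypothetical, and real graphons are estimated from data rather than given:

```python
import numpy as np

rng = np.random.default_rng(0)

def g_mixup(W1, W2, lam, n, rng):
    """Mix two graphons (edge-probability matrices) with weight lam and
    sample an undirected graph from the interpolated generator."""
    W = lam * W1 + (1 - lam) * W2            # interpolate the generators
    A = (rng.random((n, n)) < W).astype(int) # Bernoulli edge sampling
    A = np.triu(A, 1)                        # drop self-loops, sample once
    return A + A.T                           # symmetrize

# Two toy class-level graphons: one dense class, one sparse class.
W_dense = np.full((4, 4), 0.9)
W_sparse = np.full((4, 4), 0.1)
A = g_mixup(W_dense, W_sparse, lam=0.5, n=4, rng=rng)
print(A.shape, bool((A == A.T).all()))
```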
no code implementations • 13 Feb 2022 • Xiaotian Han, Zhimeng Jiang, Ninghao Liu, Qingquan Song, Jundong Li, Xia Hu
Learning discriminative node representations benefits various downstream tasks in graph analysis such as community detection and node classification.
1 code implementation • 7 Jan 2022 • Qiaoyu Tan, Ninghao Liu, Xiao Huang, Rui Chen, Soo-Hyun Choi, Xia Hu
We introduce a novel masked graph autoencoder (MGAE) framework to perform effective learning on graph structure data.
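The masking step of a masked graph autoencoder can be sketched as splitting the edge list into visible edges (fed to the encoder) and masked edges (used as reconstruction targets); the split ratio here is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_edges(edge_index, mask_ratio, rng):
    """Split a 2 x m edge list into a visible set (encoder input) and a
    masked set (reconstruction targets), as in masked graph autoencoding."""
    m = edge_index.shape[1]
    perm = rng.permutation(m)
    cut = int(m * mask_ratio)
    return edge_index[:, perm[cut:]], edge_index[:, perm[:cut]]

# Toy graph with 6 edges in source/target format.
edges = np.array([[0, 0, 1, 2, 3, 3],
                  [1, 2, 2, 3, 4, 5]])
visible, masked = mask_edges(edges, mask_ratio=0.5, rng=rng)
print(visible.shape[1], masked.shape[1])  # 3 visible, 3 masked
```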
no code implementations • 8 Nov 2021 • Ruixiang Tang, Ninghao Liu, Fan Yang, Na Zou, Xia Hu
Explainable machine learning attracts increasing attention as it improves the transparency of models, which helps machine learning earn trust in real applications.
no code implementations • 4 Nov 2021 • Mingyang Wan, Daochen Zha, Ninghao Liu, Na Zou
Machine learning models are becoming pervasive in high-stakes applications.
no code implementations • 29 Sep 2021 • Xiaotian Han, Zhimeng Jiang, Ninghao Liu, Xia Hu
To this end, we propose $\mathcal{G}$-Mixup to augment graphs for graph classification by interpolating the generator (i.e., graphon) of different classes of graphs.
no code implementations • 30 Aug 2021 • Kaixiong Zhou, Ninghao Liu, Fan Yang, Zirui Liu, Rui Chen, Li Li, Soo-Hyun Choi, Xia Hu
Graph neural networks (GNNs), which learn node representations by recursively aggregating information from their neighbors, have become a predominant computational tool in many domains.
1 code implementation • 11 Aug 2021 • Yushun Dong, Ninghao Liu, Brian Jalaian, Jundong Li
We then develop a framework EDITS to mitigate the bias in attributed networks while maintaining the performance of GNNs in downstream tasks.
no code implementations • 22 Mar 2021 • Raj Vardhan, Ninghao Liu, Phakpoom Chinprutthiwong, Weijie Fu, Zhenyu Hu, Xia Ben Hu, Guofei Gu
Several defense methods have been proposed against adversarial attacks to detect adversarial examples at test time or to make machine learning models more robust.
1 code implementation • 18 Feb 2021 • Qiaoyu Tan, Jianwei Zhang, Jiangchao Yao, Ninghao Liu, Jingren Zhou, Hongxia Yang, Xia Hu
Our sparse-interest module can adaptively infer a sparse set of concepts for each user from the large concept pool and output multiple embeddings accordingly.
1 code implementation • 18 Feb 2021 • Qiaoyu Tan, Jianwei Zhang, Ninghao Liu, Xiao Huang, Hongxia Yang, Jingren Zhou, Xia Hu
It segments the overall long behavior sequence into a series of sub-sequences, then trains the model and maintains a set of memory blocks to preserve long-term interests of users.
no code implementations • 18 Jan 2021 • Fan Yang, Ninghao Liu, Mengnan Du, Xia Hu
With the wide use of deep neural networks (DNN), model interpretability has become a critical concern, since explainable decisions are preferred in high-stake scenarios.
1 code implementation • NeurIPS 2020 • Kion Fallah, Adam Willats, Ninghao Liu, Christopher Rozell
Unfortunately, current proposals for sparse coding in the compressed space require a centralized compression process (i.e., a dense random matrix) that is biologically unrealistic due to local wiring constraints observed in neural circuits.
no code implementations • 16 Sep 2020 • Ninghao Liu, Yunsong Meng, Xia Hu, Tie Wang, Bo Long
Recent years have witnessed an increasing number of interpretation methods being developed for improving transparency of NLP models.
no code implementations • 21 Aug 2020 • Ninghao Liu, Yong Ge, Li Li, Xia Hu, Rui Chen, Soo-Hyun Choi
Different from previous work, in our model, factor discovery and representation learning are simultaneously conducted, and we are able to handle extra attribute information and knowledge.
1 code implementation • 15 Jun 2020 • Ruixiang Tang, Mengnan Du, Ninghao Liu, Fan Yang, Xia Hu
In this paper, we investigate a specific security problem called trojan attack, which aims to attack deployed DNN systems relying on the hidden trigger patterns inserted by malicious hackers.
no code implementations • 23 Apr 2020 • Ninghao Liu, Mengnan Du, Ruocheng Guo, Huan Liu, Xia Hu
In this paper, we review recent work on adversarial attacks and defenses, particularly from the perspective of machine learning interpretation.
no code implementations • 4 Mar 2020 • Qiaoyu Tan, Ninghao Liu, Xing Zhao, Hongxia Yang, Jingren Zhou, Xia Hu
In this work, we investigate the problem of hashing with graph neural networks (GNNs) for high-quality retrieval, and propose a simple yet effective discrete representation learning framework to jointly learn continuous and discrete codes.
no code implementations • 25 Sep 2019 • Weijie Fu, Meng Wang, Mengnan Du, Ninghao Liu, Shijie Hao, Xia Hu
Existing local explanation methods provide an explanation for each decision of black-box classifiers, in the form of relevance scores of features according to their contributions.
no code implementations • 13 Aug 2019 • Mengnan Du, Ninghao Liu, Fan Yang, Xia Hu
Recent explainability-related studies have shown that state-of-the-art DNNs do not always adopt correct evidence to make decisions.
no code implementations • 11 Aug 2019 • Yuening Li, Ninghao Liu, Jundong Li, Mengnan Du, Xia Hu
To this end, we propose a novel deep structured anomaly detection framework to identify the cross-modal anomalies embedded in the data.
1 code implementation • 25 May 2019 • Ninghao Liu, Qiaoyu Tan, Yuening Li, Hongxia Yang, Jingren Zhou, Xia Hu
Network embedding models are powerful tools in mapping nodes in a network into continuous vector-space representations in order to facilitate subsequent tasks such as classification and link prediction.
no code implementations • 18 Apr 2019 • Qiaoyu Tan, Ninghao Liu, Xia Hu
First, we introduce the basic models for learning node representations in homogeneous networks.
no code implementations • 27 Mar 2019 • Mengnan Du, Ninghao Liu, Fan Yang, Shuiwang Ji, Xia Hu
REAT decomposes the final prediction of an RNN into additive contributions of each word in the input text.
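The additive property means the per-word contributions, plus a bias term, must sum exactly to the model's prediction. This toy check uses hypothetical contribution scores, not values produced by REAT itself:

```python
import numpy as np

# Hypothetical per-word contributions from an additive attribution
# method: they must sum (with a bias term) to the final sentence score.
words = ["the", "movie", "was", "surprisingly", "good"]
contributions = np.array([0.01, 0.05, 0.02, 0.30, 0.55])
bias = 0.07
prediction = bias + contributions.sum()
assert np.isclose(prediction, 1.00)          # additivity holds exactly

# Rank words by contribution to see what drove the prediction.
top = words[int(np.argmax(contributions))]
print(top)  # 'good'
```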
no code implementations • 31 Jul 2018 • Mengnan Du, Ninghao Liu, Xia Hu
Interpretable machine learning tackles the important problem that humans cannot understand the behaviors of complex machine learning models and how these models arrive at a particular decision.
no code implementations • 19 Mar 2018 • Mengnan Du, Ninghao Liu, Qingquan Song, Xia Hu
While deep neural networks (DNNs) have become an effective computational tool, their prediction results are often criticized for a lack of interpretability, which is essential in many real-world applications such as health informatics.
no code implementations • 28 Nov 2017 • Ninghao Liu, Donghwa Shin, Xia Hu
Outlier detection plays an essential role in many data-driven applications to identify isolated instances that are different from the majority.