1 code implementation • 28 Feb 2024 • Guangji Bai, Yijiang Li, Chen Ling, Kibaek Kim, Liang Zhao
The transformative impact of large language models (LLMs) like LLaMA and GPT on natural language processing is countered by their prohibitive computational demands.
1 code implementation • 24 Feb 2024 • Nguyen Do, Tanmoy Chowdhury, Chen Ling, Liang Zhao, My T. Thai
Multiplex influence maximization (MIM) asks us to identify a set of seed users so as to maximize the expected number of influenced users in a multiplex network.
no code implementations • 20 Feb 2024 • Yifei Zhang, Bo Pan, Chen Ling, Yuntong Hu, Liang Zhao
The deployment and application of Large Language Models (LLMs) is hindered by their memory inefficiency, computational demands, and the high costs of API inferences.
no code implementations • 16 Feb 2024 • Mingchen Li, Chen Ling, Rui Zhang, Liang Zhao
To address this, we introduce a Condensed Transition Graph Framework for Zero-Shot Link Prediction (CTLP), which encodes all path information in linear time complexity to predict unseen relations between entities, attaining both efficiency and information preservation.
1 code implementation • 15 Feb 2024 • Chen Ling, Xujiang Zhao, Xuchao Zhang, Wei Cheng, Yanchi Liu, Yiyou Sun, Mika Oishi, Takao Osaki, Katsushi Matsuda, Jie Ji, Guangji Bai, Liang Zhao, Haifeng Chen
Existing works have been devoted to quantifying the uncertainty in LLM's response, but they often overlook the complex nature of LLMs and the uniqueness of in-context learning.
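One simple proxy for response uncertainty, sketched below, is the disagreement among repeated samples from the model: draw several answers and measure the Shannon entropy of the answer distribution. This is only a minimal illustration of the idea of quantifying uncertainty in an LLM's response (the sampled answers are invented), not the method proposed in the paper.

```python
from collections import Counter
import math

def answer_entropy(samples):
    """Shannon entropy (in bits) over distinct sampled answers.

    Higher entropy means the model's sampled responses disagree more,
    a crude proxy for response uncertainty.
    """
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Agreeing samples -> low uncertainty; conflicting samples -> high.
confident = ["Paris", "Paris", "Paris", "Paris"]
uncertain = ["Paris", "Lyon", "Nice", "Paris"]
```

In-context learning complicates this picture, since the same model can yield very different sample distributions depending on the demonstrations in the prompt.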
no code implementations • 16 Jan 2024 • Jiayu Chang, Shiyu Wang, Chen Ling, Zhaohui Qin, Liang Zhao
The intricate relationship between genetic variation and human diseases has been a focal point of medical research, evidenced by the identification of risk genes regarding specific diseases.
1 code implementation • 1 Jan 2024 • Guangji Bai, Zheng Chai, Chen Ling, Shiyu Wang, Jiaying Lu, Nan Zhang, Tingwei Shi, Ziyang Yu, Mengdan Zhu, Yifei Zhang, Carl Yang, Yue Cheng, Liang Zhao
We categorize methods based on their optimization focus: computational, memory, energy, financial, and network resources and their applicability across various stages of an LLM's lifecycle, including architecture design, pretraining, finetuning, and system design.
no code implementations • 18 Oct 2023 • Chen Ling, Xuchao Zhang, Xujiang Zhao, Yanchi Liu, Wei Cheng, Mika Oishi, Takao Osaki, Katsushi Matsuda, Haifeng Chen, Liang Zhao
In this work, we leverage pre-trained language models to iteratively retrieve reasoning paths on the external knowledge base, which does not require task-specific supervision.
no code implementations • 7 Sep 2023 • Chen Ling, Xujiang Zhao, Xuchao Zhang, Yanchi Liu, Wei Cheng, Haoyu Wang, Zhengzhang Chen, Takao Osaki, Katsushi Matsuda, Haifeng Chen, Liang Zhao
The Open Information Extraction (OIE) task aims to extract structured facts from unstructured text, typically in the form of (subject, relation, object) triples.
Ranked #6 on Open Information Extraction on OIE2016
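To make the triple format concrete, here is a toy rule-based extractor that splits a sentence around a known relation phrase. This is purely illustrative of the (subject, relation, object) output shape; it is not the paper's approach, and real OIE systems rely on much richer linguistic signals.

```python
import re

def naive_oie(sentence, relations):
    """Toy extractor: find a known relation phrase in the sentence and
    split it into a (subject, relation, object) triple. Illustrates the
    OIE output format only; not a real OIE system."""
    for rel in relations:
        m = re.search(rf"\b{re.escape(rel)}\b", sentence)
        if m:
            subj = sentence[: m.start()].strip().rstrip(",")
            obj = sentence[m.end():].strip().rstrip(".")
            return (subj, rel, obj)
    return None

triple = naive_oie("Marie Curie was born in Warsaw.", ["was born in"])
```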
no code implementations • 6 Sep 2023 • Junruo Gao, Chen Ling, Carl Yang, Liang Zhao
Online health communities (OHCs) are forums where patients with similar conditions communicate their experiences and provide moral support.
no code implementations • 7 Jun 2023 • Hejie Cui, Jiaying Lu, Shiyu Wang, ran Xu, Wenjing Ma, Shaojun Yu, Yue Yu, Xuan Kan, Chen Ling, Tianfan Fu, Liang Zhao, Joyce Ho, Fei Wang, Carl Yang
This work aims to serve as a valuable resource for understanding the potential and opportunities of HKG in health research.
no code implementations • 30 May 2023 • Chen Ling, Xujiang Zhao, Jiaying Lu, Chengyuan Deng, Can Zheng, Junxiang Wang, Tanmoy Chowdhury, Yun Li, Hejie Cui, Xuchao Zhang, Tianjiao Zhao, Amit Panalkar, Dhagash Mehta, Stefano Pasquali, Wei Cheng, Haoyu Wang, Yanchi Liu, Zhengzhang Chen, Haifeng Chen, Chris White, Quanquan Gu, Jian Pei, Carl Yang, Liang Zhao
In this article, we present a comprehensive survey on domain specification techniques for large language models, an emerging direction critical for large language model applications.
1 code implementation • 1 May 2023 • Chen Ling, Junji Jiang, Junxiang Wang, My Thai, Lukas Xue, James Song, Meikang Qiu, Liang Zhao
Influence maximization (IM) is formulated as selecting a set of initial users from a social network to maximize the expected number of influenced users.
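The classic baseline for this formulation is greedy seed selection under the independent cascade model, estimating expected spread by Monte Carlo simulation. The sketch below shows that baseline on an invented toy graph (all data and parameters are illustrative); it is not the method contributed by the paper.

```python
import random

def simulate_ic(graph, seeds, p=0.1, rng=random):
    """One independent-cascade run: each newly activated node tries once
    to activate each out-neighbor, succeeding with probability p."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_im(graph, k, p=0.1, runs=200, seed=0):
    """Greedy IM: repeatedly add the node with the largest Monte Carlo
    estimate of expected spread when joined to the current seed set."""
    rng = random.Random(seed)
    seeds = []
    for _ in range(k):
        best, best_gain = None, -1.0
        for v in graph:
            if v in seeds:
                continue
            gain = sum(simulate_ic(graph, seeds + [v], p, rng)
                       for _ in range(runs)) / runs
            if gain > best_gain:
                best, best_gain = v, gain
        seeds.append(best)
    return seeds

# Toy directed graph: node 0 reaches the most nodes, so greedy picks it.
toy = {0: [1, 2], 1: [3], 2: [3], 3: [4], 4: []}
chosen = greedy_im(toy, k=1, p=0.5)
```

The repeated Monte Carlo simulation is exactly the cost that more scalable IM methods aim to avoid.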
no code implementations • 4 Feb 2023 • Tanmoy Chowdhury, Chen Ling, Xuchao Zhang, Xujiang Zhao, Guangji Bai, Jian Pei, Haifeng Chen, Liang Zhao
Knowledge-enhanced neural machine reasoning has garnered significant attention as a cutting-edge yet challenging research area with numerous practical applications.
1 code implementation • 26 Dec 2022 • Guangji Bai, Chen Ling, Yuyang Gao, Liang Zhao
Specifically, we propose to store the parts of the image most important to the task in episodic memory, via saliency-map extraction and memory encoding.
1 code implementation • 19 Nov 2022 • Chen Ling, Tanmoy Chowdhury, Junji Jiang, Junxiang Wang, Xuchao Zhang, Haifeng Chen, Liang Zhao
As the most well-known computational method of analogical reasoning, Structure-Mapping Theory (SMT) abstracts both target and base subjects into relational graphs and forms the cognitive process of analogical reasoning by finding a corresponding subgraph (i.e., correspondence) in the target graph that is aligned with the base graph.
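In toy form, finding an SMT correspondence amounts to searching for an injective node mapping under which every base-graph edge also appears in the target graph. The brute-force sketch below (with the classic solar-system/atom analogy as invented example data) illustrates that combinatorial core; it is exponential in graph size, which is precisely why the paper learns the alignment rather than enumerating it.

```python
from itertools import permutations

def find_correspondence(base_edges, target_edges, target_nodes):
    """Brute-force search for an injective node mapping under which
    every base edge is also an edge of the target graph -- a toy
    version of SMT's 'correspondence' subgraph."""
    base_nodes = sorted({u for e in base_edges for u in e})
    target = set(target_edges)
    for image in permutations(target_nodes, len(base_nodes)):
        mapping = dict(zip(base_nodes, image))
        if all((mapping[u], mapping[v]) in target for u, v in base_edges):
            return mapping
    return None

# Base: solar system ("sun attracts planet").
# Target: atom ("nucleus attracts electron").
base = [("sun", "planet")]
target_edges = [("nucleus", "electron")]
m = find_correspondence(base, target_edges, ["nucleus", "electron"])
```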
1 code implementation • 24 Jun 2022 • Chen Ling, Junji Jiang, Junxiang Wang, Liang Zhao
Different from most traditional source localization methods, this paper adopts a probabilistic approach to account for the uncertainty of different candidate sources.
1 code implementation • 21 May 2022 • Guangji Bai, Chen Ling, Liang Zhao
Temporal domain generalization is a promising yet extremely challenging area where the goal is to learn models under temporally changing data distributions and generalize to unseen data distributions following the trends of the change.
no code implementations • 3 Nov 2021 • Chen Ling, Jeremy Blackburn, Emiliano De Cristofaro, Gianluca Stringhini
We do so vis-à-vis three research hypotheses; namely, that: 1) the video content, 2) TikTok's recommendation algorithm, and 3) the popularity of the video creator contribute to virality.
no code implementations • 18 Oct 2021 • Chenjian Pan, Chen Ling, Hongjin He, Liqun Qi, Yanwei Xu
Motivated by the multi-dimensional nature of color images and videos, we propose a novel tensor completion approach that efficiently exploits the sparsity of tensor data under the discrete cosine transform (DCT).
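The sparsity being exploited is the familiar transform-domain kind: smooth signals concentrate their energy in a few DCT coefficients. The pure-Python sketch below demonstrates this on a 1-D signal chosen to be a single low-frequency cosine (an invented example; the paper works with higher-order tensors, not 1-D vectors).

```python
import math

def dct2(x):
    """Orthonormal DCT-II of a 1-D signal (pure Python, O(n^2))."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (i + 0.5) * k / n)
                for i in range(n))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out

# A smooth signal -- here exactly the k=3 DCT basis vector -- has all
# of its energy in a single coefficient: maximal transform sparsity.
n = 32
signal = [math.cos(math.pi * (i + 0.5) * 3 / n) for i in range(n)]
coeffs = dct2(signal)
large = [k for k, c in enumerate(coeffs) if abs(c) > 1e-8]
```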
no code implementations • 1 Oct 2020 • Chenjian Pan, Chen Ling, Hongjin He, Liqun Qi, Yanwei Xu
Our model possesses a sparse regularization term to promote a sparse core tensor of the Tucker decomposition, which is beneficial for tensor data compression.
no code implementations • 8 Apr 2020 • Fatemeh Tahmasbi, Leonard Schild, Chen Ling, Jeremy Blackburn, Gianluca Stringhini, Yang Zhang, Savvas Zannettou
Finally, we find interesting differences in the context in which words related to Chinese people are used on the Web before and after the COVID-19 outbreak: on Twitter we observe a shift towards blaming China for the situation, while on /pol/ we find a shift towards using more (and new) Sinophobic slurs.
Word Embeddings • Social and Information Networks • Computers and Society
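Shifts in word context of this kind are typically measured by comparing a word's embedding across corpora trained before and after an event. The sketch below shows the mechanics with invented 3-d vectors and an invented "country" reference direction; it only illustrates the cosine-similarity comparison, not the paper's actual embeddings or findings.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical embeddings of the same word from two corpus snapshots
# (values invented for illustration). A drop in similarity to a
# reference direction signals a shift in how the word is used.
before = [0.9, 0.1, 0.1]
after = [0.4, 0.8, 0.1]
country_axis = [1.0, 0.0, 0.0]
shift = cosine(before, country_axis) - cosine(after, country_axis)
```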
1 code implementation • 16 Mar 2020 • Chen Ling, Ruiqi Wang, Guangmo Tong
Based on the nature of the grid, we leverage the Temporal Convolution Network to learn the dynamics at the grid level.
Social and Information Networks
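The building block of a Temporal Convolution Network is the causal dilated 1-D convolution: each output step depends only on the current and past inputs, spaced `dilation` steps apart. A minimal pure-Python sketch (toy kernel and input, not the paper's architecture):

```python
def causal_dilated_conv(x, kernel, dilation=1):
    """Causal dilated 1-D convolution: output[t] depends only on
    x[t], x[t - dilation], x[t - 2*dilation], ... with implicit
    zero left-padding -- the basic TCN building block."""
    k = len(kernel)
    out = []
    for t in range(len(x)):
        s = 0.0
        for j in range(k):
            idx = t - j * dilation
            if idx >= 0:
                s += kernel[j] * x[idx]
        out.append(s)
    return out

# kernel[0] weights the current step, kernel[1] the step `dilation` back,
# so y[t] = x[t] + x[t-2] here.
y = causal_dilated_conv([1, 2, 3, 4], [1.0, 1.0], dilation=2)
```

Stacking such layers with exponentially growing dilations gives a receptive field that covers long histories at the grid level without recurrence.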
no code implementations • 12 Mar 2020 • Chen Ling, Guangmo Tong, Mozi Chen
Online discussion forums create an asynchronous conversation environment for online users to exchange ideas and share opinions through a unique thread-reply communication mode.
Social and Information Networks
no code implementations • 6 Jan 2020 • Guangmo Tong, Ruiqi Wang, Chen Ling, Zheng Dong, Xiang Li
The well-known influence maximization problem aims at maximizing the influence of one information cascade in a social network by selecting appropriate seed users prior to the diffusion process.
Social and Information Networks
no code implementations • 12 Mar 2015 • Wang Weiqing, Yin Hongzhi, Chen Ling, Sun Yizhou, Sadiq Shazia, Zhou Xiaofang
Geo-SAGE considers both user personal interests and the preference of the crowd in the target region, by exploiting both the co-occurrence pattern of spatial items and the content of spatial items.