1 code implementation • 25 Apr 2024 • Liang Zhang, Anwen Hu, Haiyang Xu, Ming Yan, Yichen Xu, Qin Jin, Ji Zhang, Fei Huang
Charts are important for presenting and explaining complex data relationships.
no code implementations • 2 Apr 2024 • Ning Wang, Guangming Zhu, HS Li, Liang Zhang, Syed Afaq Ali Shah, Mohammed Bennamoun
Extensive experiments on two complex video action datasets, Charades & CAD-120, validate the improved performance and interpretability of our LaIAR framework.
no code implementations • 28 Mar 2024 • Binzong Geng, ZhaoXin Huan, Xiaolu Zhang, Yong He, Liang Zhang, Fajie Yuan, Jun Zhou, Linjian Mo
However, we argue that a critical obstacle remains in deploying LLMs for practical use: the efficiency of LLMs when processing long textual user behaviors.
1 code implementation • 19 Mar 2024 • Anwen Hu, Haiyang Xu, Jiabo Ye, Ming Yan, Liang Zhang, Bo Zhang, Chen Li, Ji Zhang, Qin Jin, Fei Huang, Jingren Zhou
In this work, we emphasize the importance of structure information in Visual Document Understanding and propose the Unified Structure Learning to boost the performance of MLLMs.
no code implementations • 19 Mar 2024 • Liang Zhang, Niao He, Michael Muehlebach
In this work, we propose a simple primal method, termed Constrained Gradient Method (CGM), for addressing functional constrained variational inequality problems, without necessitating any information on the optimal Lagrange multipliers.
no code implementations • 13 Mar 2024 • Ang Li, Qiugen Xiao, Peng Cao, Jian Tang, Yi Yuan, Zijie Zhao, Xiaoyuan Chen, Liang Zhang, Xiangyang Li, Kaitong Yang, Weidong Guo, Yukang Gan, Xu Yu, Daniell Wang, Ying Shan
Using ChatGPT as a labeler to provide feedback on open-domain prompts in RLAIF training, we observe an increase in human evaluators' preference win ratio for model responses, but a decrease in evaluators' satisfaction rate.
no code implementations • 4 Mar 2024 • Liang Zhang, Jionghao Lin, Conrad Borchers, John Sabatini, John Hollander, Meng Cao, Xiangen Hu
This research is motivated by the potential of LLMs to predict learning performance based on their inherent reasoning and computational capabilities.
1 code implementation • 22 Feb 2024 • Zihao Yue, Liang Zhang, Qin Jin
In this paper, we explore a new angle of this issue: overly detailed training data hinders the model's ability to timely terminate generation, leading to continued outputs beyond visual perception limits.
no code implementations • 16 Feb 2024 • Xinjian Zhao, Liang Zhang, Yang Liu, Ruocheng Guo, Xiangyu Zhao
To address this challenge, we propose an innovative framework: Adversarial Curriculum Graph Contrastive Learning (ACGCL), which capitalizes on the merits of pair-wise augmentation to engender graph-level positive and negative samples with controllable similarity, alongside subgraph contrastive learning to discern effective graph patterns therein.
no code implementations • 14 Feb 2024 • Liang Zhang, Zhelun Chen, Vitaly Ford
The findings advocate a multidisciplinary approach in future artificial intelligence research, with implications extending beyond building energy modeling to other specialized engineering modeling.
no code implementations • 14 Feb 2024 • Liang Zhang, Zhelun Chen
The potential of Machine Learning Control (MLC) in HVAC systems is hindered by its opaque nature and inference mechanisms, which are challenging for users and modelers to fully comprehend, ultimately leading to a lack of trust in MLC-based decision-making.
no code implementations • 3 Feb 2024 • Ziyi Zhou, Liang Zhang, Yuanxi Yu, Mingchen Li, Liang Hong, Pan Tan
Accurately modeling the protein fitness landscapes holds great importance for protein engineering.
no code implementations • 29 Jan 2024 • Liang Zhang, Jionghao Lin, Conrad Borchers, Meng Cao, Xiangen Hu
Learning performance data (e.g., quiz scores and attempts) is significant for understanding learner engagement and knowledge mastery level.
no code implementations • 27 Jan 2024 • Liang Zhang, Katherine Jijo, Spurthi Setty, Eden Chung, Fatima Javid, Natan Vidra, Tommy Clifford
Large Language Models (LLMs) generate responses to questions; however, their effectiveness is often hindered by sub-optimal quality of answers and occasional failures to provide accurate responses to questions.
no code implementations • 17 Jan 2024 • Natan Vidra, Thomas Clifford, Katherine Jijo, Eden Chung, Liang Zhang
In the realm of artificial intelligence, where a vast majority of data is unstructured, obtaining substantial amounts of labeled data to train supervised machine learning models poses a significant challenge.
no code implementations • 10 Jan 2024 • Zhanliang He, Nuoye Xiong, Hongsheng Li, Peiyi Shen, Guangming Zhu, Liang Zhang
Through experimental validation, based on this interaction interface, NN can provide humans with easily understandable explanations of the reasoning process.
no code implementations • 9 Jan 2024 • Yishuang Tian, Ning Wang, Liang Zhang
Current deep neural network algorithms still rely on end-to-end supervised training with Image-Label pairs, which makes it difficult to explain the reasons for their results and to understand or analyze their prediction logic.
no code implementations • 9 Jan 2024 • Jiajun Liu, Siyuan Wang, Guangming Zhu, Liang Zhang, Ning li, Eryang Gao
We explore the performance of the model, including using styles randomly sampled from a prior normal distribution to generate images with various free-hand sketching styles, disentangling the painters' styles from known free-hand sketches to generate images with specific styles, and generating images of unknown classes that are not in the training set.
no code implementations • 8 Jan 2024 • Qi Wang, Fengchao Zhu, Guangming Zhu, Liang Zhang, Ning li, Eryang Gao
Gesture recognition is an indispensable component of natural and efficient human-computer interaction technology, particularly in desktop-level applications, where it can significantly enhance people's productivity.
1 code implementation • 8 Jan 2024 • Huanyu Liu, JianFeng Cai, Tingjia Zhang, Hongsheng Li, Siyuan Wang, Guangming Zhu, Syed Afaq Ali Shah, Mohammed Bennamoun, Liang Zhang
Automated conversion methods are essential to overcome manual conversion challenges.
1 code implementation • 25 Dec 2023 • Rui Zhao, Liang Zhang, Biao Fu, Cong Hu, Jinsong Su, Yidong Chen
The first KL divergence optimizes the conditional variational autoencoder and regularizes the encoder outputs, while the second KL divergence performs a self-distillation from the posterior path to the prior path, ensuring the consistency of decoder outputs.
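The dual-KL objective described here follows a standard pattern for conditional VAEs; below is a minimal numpy sketch under the assumption that both paths output diagonal Gaussian distributions (function and variable names are illustrative, not taken from the paper):

```python
import numpy as np

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    """KL(q || p) between two diagonal Gaussians, summed over dimensions."""
    return 0.5 * np.sum(
        logvar_p - logvar_q
        + (np.exp(logvar_q) + (mu_q - mu_p) ** 2) / np.exp(logvar_p)
        - 1.0
    )

# First KL: regularize the CVAE posterior toward a standard normal prior.
mu_q, logvar_q = np.array([0.5, -0.2]), np.array([-0.1, 0.3])
kl_reg = kl_diag_gaussians(mu_q, logvar_q, np.zeros(2), np.zeros(2))

# Second KL: self-distillation, pulling the prior path's distribution
# toward the posterior path's distribution for output consistency.
mu_prior, logvar_prior = np.array([0.4, -0.1]), np.array([0.0, 0.2])
kl_distill = kl_diag_gaussians(mu_q, logvar_q, mu_prior, logvar_prior)
```

In a training loop both terms would be weighted and added to the reconstruction loss; the sketch only shows the divergence computations themselves.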
no code implementations • 18 Dec 2023 • Liang Zhang, Zhelun Chen
In recent years, the rapid advancement and impressive capabilities of Large Language Models (LLMs) have been evident across various domains.
1 code implementation • 13 Dec 2023 • Guangming Zhu, Siyuan Wang, Tianci Wu, Liang Zhang
Humans can easily recognize varied sketches of a category by identifying the concurrence and layout of its intrinsic semantic components, since humans draw free-hand sketches based on a common consensus about which types of semantic components constitute each sketch category.
1 code implementation • 30 Nov 2023 • Guangming Zhu, Siyuan Wang, Qing Cheng, Kelong Wu, Hao Li, Liang Zhang
With the recent surge in the use of touchscreen devices, free-hand sketching has emerged as a promising modality for human-computer interaction.
no code implementations • 27 Nov 2023 • Xi Wang, Xianyao Ling, Tom Zhang, Xuecao Li, Shaolan Wang, Zhixing Li, Liang Zhang, Peng Gong
This study demonstrates the effectiveness and superiority of the joint fine-tuning method using Prefix and LoRA for ChatGLM in the urban renewal knowledge QA tasks.
no code implementations • 4 Nov 2023 • Yanyu Chen, Yao Yao, Wai Kin Victor Chan, Li Xiao, Kai Zhang, Liang Zhang, Yun Ye
In this paper, we present a scalable and efficient paradigm to address data sparsity and cold-start issues in CDR, named CDR-Adapter, by decoupling the original recommendation model from the mapping function, without requiring re-engineering the network structure.
no code implementations • NeurIPS 2023 • Liang Zhang, Junchi Yang, Amin Karbasi, Niao He
Particularly, given the inexact initialization oracle, our regularization-based algorithms achieve the best of both worlds - optimal reproducibility and near-optimal gradient complexity - for minimization and minimax optimization.
no code implementations • 14 Oct 2023 • Liang Zhang, Bingcong Li, Kiran Koshy Thekumparampil, Sewoong Oh, Niao He
The widespread practice of fine-tuning large language models (LLMs) on domain-specific data faces two major challenges in memory and privacy.
no code implementations • 2 Oct 2023 • Ties van Rozendaal, Tushar Singhal, Hoang Le, Guillaume Sautiere, Amir Said, Krishna Buska, Anjuman Raha, Dimitris Kalatzis, Hitarth Mehta, Frank Mayer, Liang Zhang, Markus Nagel, Auke Wiggers
This work presents the first neural video codec that decodes 1080p YUV420 video in real time on a mobile device.
no code implementations • 31 Aug 2023 • ZhaoXin Huan, Ke Ding, Ang Li, Xiaolu Zhang, Xu Min, Yong He, Liang Zhang, Jun Zhou, Linjian Mo, Jinjie Gu, Zhongyi Liu, Wenliang Zhong, Guannan Zhang
3) AntM$^{2}$C provides 1 billion CTR data with 200 features, including 200 million users and 6 million items.
no code implementations • 26 Aug 2023 • Jiaxi Lv, Liang Zhang, Yi Huang, Jiancheng Huang, Shifeng Chen
To this end, DiffAtt uses the difference between two graph-level embeddings as an attentional mechanism to capture the graph structural difference of the two graphs.
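The core mechanism, an attention signal derived from the difference of two graph-level embeddings, can be sketched as follows (a minimal illustration with invented shapes and names, not the authors' exact architecture):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def diff_attention(g1, g2, node_feats):
    diff = g1 - g2                      # graph-level structural difference
    logits = node_feats @ diff          # score each node against the difference
    weights = softmax(logits)           # normalize to an attention distribution
    return weights @ node_feats         # attention-pooled comparison vector

g1 = np.array([1.0, 0.0, 0.5])          # embedding of graph 1
g2 = np.array([0.2, 0.3, 0.1])          # embedding of graph 2
nodes = np.random.default_rng(0).normal(size=(4, 3))
pooled = diff_attention(g1, g2, nodes)
```

The pooled vector summarizes which node features are most aligned with the structural difference between the two graphs.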
no code implementations • ICCV 2023 • Anwen Hu, ShiZhe Chen, Liang Zhang, Qin Jin
To overcome this limitation, we propose a novel task called Embodied Captioning, which equips visual captioning models with navigation capabilities, enabling them to actively explore the scene and reduce visual ambiguity from suboptimal viewpoints.
1 code implementation • ICCV 2023 • Liang Zhang, Nathaniel Xu, Pengfei Yang, Gaojie Jin, Cheng-Chao Huang, Lijun Zhang
Firstly, the previous definitions of robustness in trajectory prediction are ambiguous.
no code implementations • 24 Jul 2023 • Pan Tan, Mingchen Li, Yuanxi Yu, Fan Jiang, Lirong Zheng, Banghao Wu, Xinyu Sun, Liqi Kang, Jie Song, Liang Zhang, Yi Xiong, Wanli Ouyang, Zhiqiang Hu, Guisheng Fan, Yufeng Pei, Liang Hong
Designing protein mutants with high stability and activity is a critical yet challenging task in protein engineering.
1 code implementation • ICCV 2023 • Tianyi Shi, Xiaohuan Ding, Liang Zhang, Xin Yang
Curvilinear object segmentation is critical for many applications.
1 code implementation • 20 May 2023 • Zihao Yue, Qi Zhang, Anwen Hu, Liang Zhang, Ziheng Wang, Qin Jin
Closer to real scenarios, the Movie Clip Narrating (MCN) task in our benchmark asks models to generate role-aware narration paragraphs for complete movie clips where no actors are speaking.
1 code implementation • 10 May 2023 • Anwen Hu, ShiZhe Chen, Liang Zhang, Qin Jin
Existing metrics only provide a single score to measure caption qualities, which are less explainable and informative.
no code implementations • 3 May 2023 • Davide Coluzzi, Valentina Bordin, Massimo Walter Rivolta, Igor Fortel, Liang Zhang, Alex Leow, Giuseppe Baselli
The XAI assessment was conducted across 132 brain parcels, extracted from a combination of the Harvard-Oxford and AAL brain atlases, and compared to well-known pathological regions to measure adherence to domain knowledge.
1 code implementation • 19 Apr 2023 • Liang Zhang, Anwen Hu, Jing Zhang, Shuo Hu, Qin Jin
Taking into account the length of product manuals and the fact that a question is always related to a small number of pages, MPMQA can be naturally split into two subtasks: retrieving the most related pages and then generating multimodal answers.
no code implementations • 13 Apr 2023 • Liang Zhang, Cheng Long
The constructed hypergraph would naturally capture the high-order relationships among roads with hyperedges.
no code implementations • 7 Apr 2023 • Pan Tan, Mingchen Li, Liang Zhang, Zhiqiang Hu, Liang Hong
We introduce TemPL, a novel deep learning approach for zero-shot prediction of protein stability and activity, harnessing temperature-guided language modeling.
1 code implementation • 20 Mar 2023 • Liang Zhang, Yutong Zhang, Jianming Deng, Chen Li
Reinforcement learning (RL) has emerged as a promising solution for addressing traffic signal control (TSC) challenges.
1 code implementation • 12 Mar 2023 • Ludan Ruan, Anwen Hu, Yuqing Song, Liang Zhang, Sipeng Zheng, Qin Jin
In this paper, we extend the state-of-the-art Vision-Language model CLIP to accommodate the audio modality for Vision-Language-Audio multimodal processing.
no code implementations • 10 Feb 2023 • Deyun Zhang, Shijia Geng, Yang Zhou, Weilun Xu, Guodong Wei, Kai Wang, Jie Yu, Qiang Zhu, Yongkui Li, Yonghong Zhao, Xingyue Chen, Rui Zhang, Zhaoji Fu, Rongbo Zhou, Yanqi E, Sumei Fan, Qinghao Zhao, Chuandong Cheng, Nan Peng, Liang Zhang, Linlin Zheng, Jianjun Chu, Hongbin Xu, Chen Tan, Jian Liu, Huayue Tao, Tong Liu, Kangyin Chen, Chenyang Jiang, Xingpeng Liu, Shenda Hong
In this study, we present an AI system developed to detect and screen cardiac abnormalities (CAs) from real-world ECG images.
no code implementations • CVPR 2023 • Mingtao Feng, Haoran Hou, Liang Zhang, Zijie Wu, Yulan Guo, Ajmal Mian
In-depth understanding of a 3D scene not only involves locating/recognizing individual objects, but also requires to infer the relationships and interactions among them.
1 code implementation • 26 Nov 2022 • Liang Zhang, Jinsong Su, Yidong Chen, Zhongjian Miao, Zijun Min, Qingguo Hu, Xiaodong Shi
Existing methods usually directly predict the relations of all entity pairs in the input document in a one-pass manner, ignoring the fact that predictions for some entity pairs heavily depend on the predicted results of other pairs.
1 code implementation • 15 Nov 2022 • Liang Zhang, Cheng Long, Gao Cong
Motivated by the success of contrastive learning for representation learning, we propose to leverage it for multi-view region representation learning and design a model called ReMVC (Region Embedding with Multi-View Contrastive Learning) by following two guidelines: i) comparing a region with others within each view for effective representation extraction and ii) comparing a region with itself across different views for cross-view information sharing.
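Both guidelines reduce to contrastive objectives of the InfoNCE form; a minimal numpy sketch is shown below (embedding sizes, names, and the way positives/negatives are formed are illustrative, not the ReMVC specifics):

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE loss for one anchor: pull the positive close, push negatives away."""
    def sim(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([sim(anchor, positive)] + [sim(anchor, n) for n in negatives]) / tau
    logits -= logits.max()              # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

rng = np.random.default_rng(1)
region_view_a = rng.normal(size=8)                          # region in one view
region_view_b = region_view_a + 0.05 * rng.normal(size=8)   # same region, other view
other_regions = [rng.normal(size=8) for _ in range(5)]      # intra-view negatives

# Cross-view guideline: the same region's other-view embedding is the positive;
# intra-view guideline: other regions in the same view act as negatives.
loss = info_nce(region_view_a, region_view_b, other_regions)
```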
no code implementations • 12 Nov 2022 • Liang Zhang, Justin Lieffers, Adarsh Pyarelal
We contend that for an artificially intelligent agent to effectively model human teammates, i.e., demonstrate computational theory of mind (ToM), it should do the same.
no code implementations • 12 Nov 2022 • Liang Zhang, Justin Lieffers, Adarsh Pyarelal
Human decision-making often involves combining similar states into categories and reasoning at the level of the categories rather than the actual states.
1 code implementation • 2 Nov 2022 • Liang Zhang, Yutong Zhang, Shubin Xie, Jianming Deng, Chen Li
Reinforcement learning (RL) is gaining popularity as an effective approach for traffic signal control (TSC) and is increasingly applied in this domain.
no code implementations • 26 Oct 2022 • He Zhang, Sizhen Li, Liang Zhang, David H. Mathews, Liang Huang
Vienna RNAcofold, which reduces the problem to classical single-sequence folding by concatenating the two strands, scales cubically with the combined sequence length and is slow for long sequences.
no code implementations • 18 Jul 2022 • Hoang Le, Liang Zhang, Amir Said, Guillaume Sautiere, Yang Yang, Pranav Shrestha, Fei Yin, Reza Pourreza, Auke Wiggers
Realizing the potential of neural video codecs on mobile devices is a big technological challenge due to the computational complexity of deep networks and the power-constrained mobile hardware.
no code implementations • 14 Jul 2022 • Haoteng Tang, Guixiang Ma, Lei Guo, Xiyao Fu, Heng Huang, Liang Zhang
Here, we propose an interpretable hierarchical signed graph representation learning model to extract graph-level representations from brain functional networks, which can be used for different prediction tasks.
no code implementations • 8 Jul 2022 • Zhaojia Huang, Liang Zhang, Tianhao Zhi
New Energy Vehicles (NEVs) have seen rapid development and commercialization in recent years.
1 code implementation • 29 Jun 2022 • Liang Zhang, Sizhen Li, He Zhang, David H. Mathews, Liang Huang
We present LinearAlifold, an efficient algorithm for folding aligned RNA homologs that scales linearly with both the sequence length and the number of sequences, based on our recent work LinearFold that folds a single RNA in linear time.
no code implementations • 7 Jun 2022 • Hongsheng Li, Guangming Zhu, Wu Zhen, Lan Ni, Peiyi Shen, Liang Zhang, Ning Wang, Cong Hua
However, there is still room for improvement in video HOI detection performance.
no code implementations • 1 Jun 2022 • Liang Zhang, Kiran Koshy Thekumparampil, Sewoong Oh, Niao He
We provide a general framework for solving differentially private stochastic minimax optimization (DP-SMO) problems, which enables the practitioners to bring their own base optimization algorithm and use it as a black-box to obtain the near-optimal privacy-loss trade-off.
no code implementations • 29 May 2022 • Liang Zhang, Anwen Hu, Qin Jin
Specifically, we design a lightweight language acquisition encoder based on state-of-the-art monolingual VLP models.
no code implementations • 28 May 2022 • Siqi Zhang, Yifan Hu, Liang Zhang, Niao He
We further study the algorithm-dependent generalization bounds via stability arguments of algorithms.
1 code implementation • 28 Apr 2022 • Mingtao Feng, Kendong Liu, Liang Zhang, Hongshan Yu, Yaonan Wang, Ajmal Mian
Saliency detection with light field images is becoming attractive given the abundant cues available, however, this comes at the expense of large-scale pixel level annotated data which is expensive to generate.
no code implementations • 26 Apr 2022 • Dario Rossi, Liang Zhang
The tremendous achievements of Artificial Intelligence (AI) in computer vision, natural language processing, games, and robotics have extended the reach of the AI hype to other fields: in telecommunication networks, the long-term vision is to let AI fully manage, and autonomously drive, all aspects of network operation.
no code implementations • 21 Apr 2022 • Liang Zhang, Yidong Cheng
After that, we view the entity-pair matrix as an image, randomly mask it, and restore it through an inference module to capture the correlations between the relations.
Ranked #3 on Relation Extraction on GDA
no code implementations • 12 Apr 2022 • Yuan Sui, Fanyang Bu, Yingting Hu, Wei Yan, Liang Zhang
Nested named entity recognition (NER) aims to identify the entity boundaries and recognize categories of the named entities in a complex hierarchical sentence.
1 code implementation • 11 Apr 2022 • Biao Fu, PeiGen Ye, Liang Zhang, Pei Yu, Cong Hu, Yidong Chen, Xiaodong Shi
Sign Language Translation (SLT) is a promising technology to bridge the communication gap between deaf and hearing people.
Ranked #6 on Sign Language Translation on CSL-Daily
no code implementations • 7 Apr 2022 • Liang Zhang, Shubin Xie, Jianming Deng
We would like to withdraw this article for the following reasons: (1) the article is unsatisfactory due to its limited language and theoretical description; (2) we have enriched and revised the article with the help of other authors; (3) we must update the author contribution information.
no code implementations • 1 Apr 2022 • Liang Zhang, Yidong Cheng
Document-level relation extraction (RE), which requires reasoning on multiple entities in different sentences to identify complex inter-sentence relations, is more challenging than sentence-level RE.
no code implementations • 26 Mar 2022 • Liang Zhang, Yidong Cheng
Specifically, the Dense-CCNet performs entity-pair-level logical reasoning through the Criss-Cross Attention (CCA), which can collect contextual information in horizontal and vertical directions on the entity-pair matrix to enhance the corresponding entity-pair representation.
Ranked #2 on Relation Extraction on CDR
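The criss-cross idea can be illustrated in a few lines (a toy version, not the paper's exact CCA module): each cell of the entity-pair matrix gathers context from its own row (horizontal) and column (vertical).

```python
import numpy as np

M = np.arange(16, dtype=float).reshape(4, 4)   # toy entity-pair score matrix

def criss_cross_context(mat, i, j):
    """Mean over cell (i, j)'s row and column, counting (i, j) once."""
    row = mat[i, :]
    col = mat[:, j]
    total = row.sum() + col.sum() - mat[i, j]  # (i, j) appears in both sums
    return total / (len(row) + len(col) - 1)

ctx = criss_cross_context(M, 1, 2)
```

In the actual model the aggregation is a learned attention over these two directions rather than a plain mean; the sketch only shows which cells contribute.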
no code implementations • 26 Mar 2022 • Sha Yuan, Hanyu Zhao, Shuai Zhao, Jiahong Leng, Yangxiao Liang, Xiaozhi Wang, Jifan Yu, Xin Lv, Zhou Shao, Jiaao He, Yankai Lin, Xu Han, Zhenghao Liu, Ning Ding, Yongming Rao, Yizhao Gao, Liang Zhang, Ming Ding, Cong Fang, Yisen Wang, Mingsheng Long, Jing Zhang, Yinpeng Dong, Tianyu Pang, Peng Cui, Lingxiao Huang, Zheng Liang, HuaWei Shen, HUI ZHANG, Quanshi Zhang, Qingxiu Dong, Zhixing Tan, Mingxuan Wang, Shuo Wang, Long Zhou, Haoran Li, Junwei Bao, Yingwei Pan, Weinan Zhang, Zhou Yu, Rui Yan, Chence Shi, Minghao Xu, Zuobai Zhang, Guoqiang Wang, Xiang Pan, Mengjie Li, Xiaoyu Chu, Zijun Yao, Fangwei Zhu, Shulin Cao, Weicheng Xue, Zixuan Ma, Zhengyan Zhang, Shengding Hu, Yujia Qin, Chaojun Xiao, Zheni Zeng, Ganqu Cui, Weize Chen, Weilin Zhao, Yuan YAO, Peng Li, Wenzhao Zheng, Wenliang Zhao, Ziyi Wang, Borui Zhang, Nanyi Fei, Anwen Hu, Zenan Ling, Haoyang Li, Boxi Cao, Xianpei Han, Weidong Zhan, Baobao Chang, Hao Sun, Jiawen Deng, Chujie Zheng, Juanzi Li, Lei Hou, Xigang Cao, Jidong Zhai, Zhiyuan Liu, Maosong Sun, Jiwen Lu, Zhiwu Lu, Qin Jin, Ruihua Song, Ji-Rong Wen, Zhouchen Lin, LiWei Wang, Hang Su, Jun Zhu, Zhifang Sui, Jiajun Zhang, Yang Liu, Xiaodong He, Minlie Huang, Jian Tang, Jie Tang
With the rapid development of deep learning, training Big Models (BMs) for multiple downstream tasks becomes a popular paradigm.
no code implementations • 3 Jan 2022 • Guangming Zhu, Liang Zhang, Youliang Jiang, Yixuan Dang, Haoran Hou, Peiyi Shen, Mingtao Feng, Xia Zhao, Qiguang Miao, Syed Afaq Ali Shah, Mohammed Bennamoun
In this paper, we provide a comprehensive survey of recent achievements in this field brought about by deep learning techniques.
1 code implementation • CVPR 2022 • Mingtao Feng, Kendong Liu, Liang Zhang, Hongshan Yu, Yaonan Wang, Ajmal Mian
Saliency detection with light field images is becoming attractive given the abundant cues available, however, this comes at the expense of large-scale pixel level annotated data which is expensive to generate.
2 code implementations • 30 Dec 2021 • Liang Zhang, Shubin Xie, Jianming Deng
We propose two new methods: (1) Max Queue-Length (M-QL), an optimization-based traditional method designed based on the property of queue length; and (2) AttentionLight, an RL model that employs the self-attention mechanism to capture the signal phase correlation without requiring human knowledge of phase relationships.
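The queue-length property behind M-QL can be illustrated with a toy phase selector (the phase-to-lane mapping and queue values are invented for the example; the real method operates on full intersection state):

```python
# Pick the phase whose permitted lanes have the largest total queue length.
queue_lengths = {"N": 7, "S": 4, "E": 2, "W": 3}   # vehicles queued per approach
phases = {
    "NS_through": ["N", "S"],   # north-south green
    "EW_through": ["E", "W"],   # east-west green
}

def max_queue_phase(queues, phase_lanes):
    return max(phase_lanes, key=lambda p: sum(queues[l] for l in phase_lanes[p]))

chosen = max_queue_phase(queue_lengths, phases)
```

Here the north-south phase wins (7 + 4 = 11 queued vehicles versus 2 + 3 = 5).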
1 code implementation • 19 Dec 2021 • Liang Zhang, Qiang Wu, Jun Shen, Linyuan Lü, Bo Du, Jianqing Wu
Many studies confirmed that a proper traffic state representation is more important than complex algorithms for the classical traffic signal control (TSC) problem.
no code implementations • IJCAI 2021 • Yumin Su, Liang Zhang, Quanyu Dai, Bo Zhang, Jinyao Yan, Dan Wang, Yongjun Bao, Sulong Xu, Yang He and Weipeng Yan
Conversion rate (CVR) prediction is becoming increasingly important in the multi-billion dollar online display advertising industry.
no code implementations • 7 Dec 2021 • Rui Wang, Chengtun Wu, Jiawen Xin, Liang Zhang
Instance object detection plays an important role in intelligent monitoring, visual navigation, human-computer interaction, intelligent services and other fields.
1 code implementation • 4 Dec 2021 • Qiang Wu, Liang Zhang, Jun Shen, Linyuan Lü, Bo Du, Jianqing Wu
Since conventional approaches could not adapt to dynamic traffic conditions, reinforcement learning (RL) has attracted more attention to help solve the traffic signal control (TSC) problem.
no code implementations • 15 Oct 2021 • Boxiang Liu, Yanjun Li, Liang Zhang
Human and animal tissues consist of heterogeneous cell types that organize and interact in highly structured manners.
no code implementations • 19 Aug 2021 • Ning Wang, Guangming Zhu, Liang Zhang, Peiyi Shen, Hongsheng Li, Cong Hua
With the effective spatio-temporal relationship modeling, it is possible not only to uncover contextual information in each frame but also to directly capture inter-time dependencies.
no code implementations • 16 Aug 2021 • Weiwei Guo, Xiaowei Liu, Sida Wang, Michaeel Kazi, Zhiwei Wang, Zhoutong Fu, Jun Jia, Liang Zhang, Huiji Gao, Bo Long
Building a successful search system requires a thorough understanding of textual data semantics, where deep learning based natural language processing techniques (deep NLP) can be of great help.
no code implementations • 30 Jul 2021 • Weiwei Guo, Xiaowei Liu, Sida Wang, Michaeel Kazi, Zhoutong Fu, Huiji Gao, Jun Jia, Liang Zhang, Bo Long
Many search systems work with large amounts of natural language data, e.g., search queries, user profiles and documents, where deep learning based natural language processing techniques (deep NLP) can be of great help.
1 code implementation • 24 Jun 2021 • Johann Li, Guangming Zhu, Cong Hua, Mingtao Feng, BasheerBennamoun, Ping Li, Xiaoyuan Lu, Juan Song, Peiyi Shen, Xu Xu, Lin Mei, Liang Zhang, Syed Afaq Ali Shah, Mohammed Bennamoun
Thus, as comprehensive as possible, this paper provides a collection of medical image datasets with their associated challenges for deep learning research.
no code implementations • 14 Jun 2021 • Xu Han, Zhengyan Zhang, Ning Ding, Yuxian Gu, Xiao Liu, Yuqi Huo, Jiezhong Qiu, Yuan YAO, Ao Zhang, Liang Zhang, Wentao Han, Minlie Huang, Qin Jin, Yanyan Lan, Yang Liu, Zhiyuan Liu, Zhiwu Lu, Xipeng Qiu, Ruihua Song, Jie Tang, Ji-Rong Wen, Jinhui Yuan, Wayne Xin Zhao, Jun Zhu
Large-scale pre-trained models (PTMs) such as BERT and GPT have recently achieved great success and become a milestone in the field of artificial intelligence (AI).
no code implementations • NeurIPS 2021 • Feiping Nie, Shenfei Pei, Rong Wang, Liang Zhang, Jun Wu, Qinglong Chang, Xuelong Li
We also developed a general model that unifies LKM, KSUMS, and SC, and discussed the connections among them.
no code implementations • 22 Apr 2021 • Yang An, Liang Zhang, Mao You, Xueqing Tian, Bo Jin, Xiaopeng Wei
Second, we incorporate a novel interactive long-short term memory network (InLSTM) to reinforce the interactions of multilevel medical sequences in EHR data with the help of the calibrated memory-augmented cell and an enhanced input gate.
1 code implementation • ICCV 2021 • Mingtao Feng, Zhen Li, Qi Li, Liang Zhang, Xiangdong Zhang, Guangming Zhu, HUI ZHANG, Yaonan Wang, Ajmal Mian
There are three main challenges in 3D object grounding: to find the main focus in the complex and diverse description; to understand the point cloud scene; and to locate the target object.
2 code implementations • 11 Mar 2021 • Yuqi Huo, Manli Zhang, Guangzhen Liu, Haoyu Lu, Yizhao Gao, Guoxing Yang, Jingyuan Wen, Heng Zhang, Baogui Xu, Weihao Zheng, Zongzheng Xi, Yueqian Yang, Anwen Hu, Jinming Zhao, Ruichen Li, Yida Zhao, Liang Zhang, Yuqing Song, Xin Hong, Wanqing Cui, Danyang Hou, Yingyan Li, Junyi Li, Peiyu Liu, Zheng Gong, Chuhao Jin, Yuchong Sun, ShiZhe Chen, Zhiwu Lu, Zhicheng Dou, Qin Jin, Yanyan Lan, Wayne Xin Zhao, Ruihua Song, Ji-Rong Wen
We further construct a large Chinese multi-source image-text dataset called RUC-CAS-WenLan for pre-training our BriVL model.
Ranked #1 on Image Retrieval on RUC-CAS-WenLan
no code implementations • ICCV 2021 • Jianping Wu, Liang Zhang, Ye Liu, Ke Chen
We propose a novel approach that integrates under-parameterized RANSAC (UPRANSAC) with Hough Transform to detect vanishing points (VPs) from un-calibrated monocular images.
no code implementations • 18 Dec 2020 • Federico García, Mariano Méndez, Konstantinos Karpouzas, Tomaso Belloni, Liang Zhang, Diego Altamirano
The data show a strong type-B QPO at ~4.5 Hz with increasing fractional rms amplitude with energy and positive lags with respect to a reference band at 2-2.5 keV.
High Energy Astrophysical Phenomena
no code implementations • 17 Dec 2020 • Sirjan Kafle, Aman Gupta, Xue Xia, Ananth Sankar, Xi Chen, Di Wen, Liang Zhang
SGMM represents each video by the parameters of a Gaussian mixture model (GMM) trained for that video.
no code implementations • 16 Dec 2020 • Dingwei Li, Qinglong Chang, Lixue Pang, Yanfang Zhang, Xudong Sun, Jikun Ding, Liang Zhang
Although many achievements have been made since Google introduced the paradigm of federated learning (FL), there still exists much room for researchers to optimize its efficiency.
no code implementations • NeurIPS 2020 • Gaojie Jin, Xinping Yi, Liang Zhang, Lijun Zhang, Sven Schewe, Xiaowei Huang
This paper studies the novel concept of weight correlation in deep neural networks and discusses its impact on the networks' generalisation ability.
no code implementations • 19 Nov 2020 • Suihanjin Yu, Youmin Zhang, Chen Wang, Xiao Bai, Liang Zhang, Edwin R. Hancock
To address this problem, we introduce a lightweight but effective Global Matching Component (GMC) to grab global matching features.
1 code implementation • 12 Oct 2020 • Gaojie Jin, Xinping Yi, Liang Zhang, Lijun Zhang, Sven Schewe, Xiaowei Huang
This paper studies the novel concept of weight correlation in deep neural networks and discusses its impact on the networks' generalisation ability.
1 code implementation • 6 Aug 2020 • Weiwei Guo, Xiao-Wei Liu, Sida Wang, Huiji Gao, Ananth Sankar, Zimeng Yang, Qi Guo, Liang Zhang, Bo Long, Bee-Chung Chen, Deepak Agarwal
Ranking is the most important component in a search system.
no code implementations • 12 Jul 2020 • Liang Zhang, Johann Li, Ping Li, Xiaoyuan Lu, Peiyi Shen, Guangming Zhu, Syed Afaq Shah, Mohammed Bennarmoun, Kun Qian, Björn W. Schuller
To the best of our knowledge, MeDaS is the first open-source platform providing a collaborative and interactive service that lets researchers from medical backgrounds easily use DL-related toolkits, and at the same time lets scientists and engineers from the information sciences understand the medical knowledge side.
no code implementations • CVPR 2020 • Zhaoyi Wan, Jielei Zhang, Liang Zhang, Jiebo Luo, Cong Yao
This remedy alleviates the problem of vocabulary reliance and improves the overall scene text recognition performance.
no code implementations • 23 Apr 2020 • Qingsen Yan, Bo wang, Dong Gong, Chuan Luo, Wei Zhao, Jianhu Shen, Qinfeng Shi, Shuo Jin, Liang Zhang, Zheng You
Inspired by the observation that the boundary of the infected lung can be enhanced by adjusting the global intensity, in the proposed deep CNN, we introduce a feature variation block which adaptively adjusts the global properties of the features for segmenting COVID-19 infection.
2 code implementations • 21 Apr 2020 • He Zhang, Liang Zhang, Ang Lin, Congcong Xu, Ziyu Li, Kaibo Liu, Boxiang Liu, Xiaopin Ma, Fanfan Zhao, Weiguo Yao, Hangwen Li, David H. Mathews, Yujian Zhang, Liang Huang
Messenger RNA (mRNA) vaccines are being used for COVID-19, but still suffer from the critical issue of mRNA instability and degradation, which is a major obstacle in the storage, distribution, and efficacy of the vaccine.
no code implementations • 22 Feb 2020 • Sripad Krishna Devalla, Tan Hung Pham, Satish Kumar Panda, Liang Zhang, Giridhar Subramanian, Anirudh Swaminathan, Chin Zhi Yun, Mohan Rajan, Sujatha Mohan, Ramaswami Krishnadas, Vijayalakshmi Senthil, John Mark S. de Leon, Tin A. Tun, Ching-Yu Cheng, Leopold Schmetterer, Shamira Perera, Tin Aung, Alexandre H. Thiery, Michael J. A. Girard
Since the introduction of optical coherence tomography (OCT), it has been possible to study the complex 3D morphological changes of the optic nerve head (ONH) tissues that occur along with the progression of glaucoma.
no code implementations • 10 Feb 2020 • Yingdong Hu, Liang Zhang, Wei Shan, Xiaoxiao Qin, Jing Qi, Zhenzhou Wu, Yang Yuan
In the big data era, many organizations face the dilemma of data sharing.
no code implementations • 30 Jan 2020 • Liang Zhang, Yufei Liu, Hang Xiao, Lu Yang, Guangming Zhu, Syed Afaq Shah, Mohammed Bennamoun, Peiyi Shen
Scene text detection has received attention for years and achieved impressive performance across various benchmarks.
1 code implementation • 30 Jan 2020 • Liang Zhang, Xudong Wang, Hongsheng Li, Guangming Zhu, Peiyi Shen, Ping Li, Xiaoyuan Lu, Syed Afaq Ali Shah, Mohammed Bennamoun
To solve the problems mentioned above, we propose a novel graph self-adaptive pooling method with two objectives: (1) to construct a reasonable pooled graph topology, both structure and feature information of the graph are considered simultaneously, which provides additional veracity and objectivity in node selection; and (2) to make the pooled nodes contain sufficiently effective graph information, node feature information is aggregated before the unimportant nodes are discarded, so that the selected nodes retain information from their neighbors, which enhances the use of features of the unselected nodes.
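The pooling idea in this snippet (aggregate neighbor features into each node before discarding low-importance nodes, then keep the top-scoring subset) can be sketched roughly as follows; the mean aggregation and L2-norm importance score are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def self_adaptive_pool(adj, feats, ratio=0.5):
    """Toy sketch of self-adaptive graph pooling.

    adj:   (n, n) 0/1 adjacency matrix
    feats: (n, d) node feature matrix
    Neighbor features are aggregated BEFORE dropping nodes, so the
    kept nodes retain information from discarded neighbors.
    """
    deg = adj.sum(axis=1, keepdims=True) + 1.0
    # aggregate self + 1-hop neighbor features (mean over the neighborhood)
    agg = (feats + adj @ feats) / deg
    # score nodes by the norm of their aggregated features (illustrative rule)
    scores = np.linalg.norm(agg, axis=1)
    k = max(1, int(ratio * len(scores)))
    keep = np.argsort(scores)[::-1][:k]
    # induced subgraph on the kept nodes
    return adj[np.ix_(keep, keep)], agg[keep]

# 4-node path graph with 2-D features
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
X = np.arange(8, dtype=float).reshape(4, 2)
pa, pf = self_adaptive_pool(A, X, ratio=0.5)
print(pa.shape, pf.shape)  # (2, 2) (2, 2)
```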
no code implementations • 30 Nov 2019 • Mingtao Feng, Syed Zulqarnain Gilani, Yaonan Wang, Liang Zhang, Ajmal Mian
Convolutional Neural Networks (CNNs) have emerged as a powerful strategy for most object detection tasks on 2D images.
no code implementations • 11 Nov 2019 • Liang Zhang, Guannan Liu, Junjie Wu
Given the effectiveness and ease of use, Item-based Collaborative Filtering (ICF) methods have been broadly used in industry in recent years.
no code implementations • 3 Nov 2019 • Yikai Wang, Liang Zhang, Quanyu Dai, Fuchun Sun, Bo Zhang, Yang He, Weipeng Yan, Yongjun Bao
In deep CTR models, exploiting users' historical data is essential for learning users' behaviors and interests.
no code implementations • 27 Sep 2019 • Mingtao Feng, Liang Zhang, Xuefei Lin, Syed Zulqarnain Gilani, Ajmal Mian
We propose a point attention network that learns rich local shape features and their contextual correlations for 3D point cloud semantic segmentation.
1 code implementation • 30 Aug 2019 • Quanyu Dai, Xiao Shen, Liang Zhang, Qiang Li, Dan Wang
To improve this strategy, we further propose an interpretable adversarial training method by enforcing the reconstruction of the adversarial examples in the discrete graph domain.
no code implementations • 16 Aug 2019 • Guannan Liu, Liang Zhang, Junjie Wu, Xiao Fang
Specifically, eRAN first maps items connected in attribute networks to low-dimensional embedding vectors through a deep autoencoder, and then an attention mechanism is applied to model the attractions of attributes to users, from which a personalized item representation can be derived.
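The attention step described here can be sketched as attention weights over an item's attribute embeddings, conditioned on a user embedding; the dot-product scoring below is an illustrative choice, and the autoencoder that produces the embeddings is not reproduced.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def personalized_item_repr(attr_embs, user_emb):
    """Attention over an item's attribute embeddings, weighted by user
    affinity (illustrative sketch, not eRAN's exact formulation).

    attr_embs: (k, d) embeddings of the item's k attributes
    user_emb:  (d,)   user embedding
    """
    scores = attr_embs @ user_emb   # user-attribute affinity
    w = softmax(scores)             # attention weights, sum to 1
    return w, w @ attr_embs         # personalized item vector

attr_embs = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
user_emb = np.array([1.0, 0.2])
w, rep = personalized_item_repr(attr_embs, user_emb)
print(w.round(3), rep.round(3))
```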
no code implementations • 22 Mar 2019 • Dongyang Zhao, Liang Zhang, Bo Zhang, Lizhou Zheng, Yongjun Bao, Weipeng Yan
To tackle this challenge, we propose a deep hierarchical reinforcement learning based recommendation framework, which consists of two components, i.e., a high-level agent and a low-level agent.
1 code implementation • 26 Feb 2019 • Ziyao Li, Liang Zhang, Guojie Song
Graph Convolutional Networks (GCNs) have proved to be one of the most powerful architectures for aggregating local neighborhood information for individual graph nodes.
1 code implementation • NeurIPS 2018 • Liang Zhang, Guangming Zhu, Lin Mei, Peiyi Shen, Syed Afaq Ali Shah, Mohammed Bennamoun
On this basis, a new variant of LSTM is derived, in which the convolutional structures are only embedded into the input-to-state transition of LSTM.
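The variant described here embeds convolution only in the input-to-state transition. A minimal single-channel sketch follows; the elementwise state-to-state transition is an illustrative simplification, not the paper's exact derivation.

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def conv_same(x, w):
    """Naive single-channel 'same' 2-D convolution (illustrative)."""
    kh, kw = w.shape
    xp = np.pad(x, ((kh // 2,) * 2, (kw // 2,) * 2))
    return np.array([[np.sum(xp[i:i + kh, j:j + kw] * w)
                      for j in range(x.shape[1])]
                     for i in range(x.shape[0])])

def lstm_variant_step(x, h, c, Wx, Uh, b):
    """One step of an LSTM cell where ONLY the input-to-state path is
    convolutional; the state-to-state path uses an elementwise product
    (a simplifying stand-in for the paper's exact transition).
    Wx: 4 conv kernels, Uh: 4 elementwise weight maps, b: 4 biases,
    for the input, forget, output, and candidate gates respectively."""
    gi = sigmoid(conv_same(x, Wx[0]) + Uh[0] * h + b[0])
    gf = sigmoid(conv_same(x, Wx[1]) + Uh[1] * h + b[1])
    go = sigmoid(conv_same(x, Wx[2]) + Uh[2] * h + b[2])
    gg = np.tanh(conv_same(x, Wx[3]) + Uh[3] * h + b[3])
    c_new = gf * c + gi * gg
    h_new = go * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
H = W = 5
x = rng.standard_normal((H, W))
h0 = np.zeros((H, W)); c0 = np.zeros((H, W))
Wx = rng.standard_normal((4, 3, 3)) * 0.1
Uh = rng.standard_normal((4, H, W)) * 0.1
b = np.zeros(4)
h1, c1 = lstm_variant_step(x, h0, c0, Wx, Uh, b)
print(h1.shape)  # (5, 5)
```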
3 code implementations • 15 Nov 2018 • Liang Zhang, Gang Wang, Georgios B. Giannakis
To bypass these hurdles, this paper advocates deep neural networks (DNNs) for real-time power system monitoring.
no code implementations • 14 Nov 2018 • Ziyao Li, Liang Zhang, Guojie Song
We further propose SepNE, a simple and flexible network embedding algorithm which independently learns representations for different subsets of nodes in separated processes.
no code implementations • 7 May 2018 • Xiangyu Zhao, Long Xia, Liang Zhang, Zhuoye Ding, Dawei Yin, Jiliang Tang
In particular, we propose a principled approach to jointly generate a set of complementary items and the corresponding strategy to display them in a 2-D page; and propose a novel page-wise recommendation framework based on deep reinforcement learning, DeepPage, which can optimize a page of items with proper display based on real-time feedback from users.
no code implementations • 26 Apr 2018 • Xiang Yan, Syed Zulqarnain Gilani, Hanlin Qin, Mingtao Feng, Liang Zhang, Ajmal Mian
Detecting representative frames in videos based on human actions is quite challenging because of the combined factors of human pose in action and the background.
no code implementations • 19 Feb 2018 • Xiangyu Zhao, Liang Zhang, Zhuoye Ding, Long Xia, Jiliang Tang, Dawei Yin
Users' feedback can be positive or negative, and both types of feedback have great potential to boost recommendations.
no code implementations • 13 Jan 2018 • Sheng-Kai Liao, Wen-Qi Cai, Johannes Handsteiner, Bo Liu, Juan Yin, Liang Zhang, Dominik Rauch, Matthias Fink, Ji-Gang Ren, Wei-Yue Liu, Yang Li, Qi Shen, Yuan Cao, Feng-Zhi Li, Jian-Feng Wang, Yong-Mei Huang, Lei Deng, Tao Xi, Lu Ma, Tai Hu, Li Li, Nai-Le Liu, Franz Koidl, Peiyuan Wang, Yu-Ao Chen, Xiang-Bin Wang, Michael Steindorfer, Georg Kirchner, Chao-Yang Lu, Rong Shu, Rupert Ursin, Thomas Scheidl, Cheng-Zhi Peng, Jian-Yu Wang, Anton Zeilinger, Jian-Wei Pan
One demonstration was the transmission of images in a one-time-pad configuration from China to Austria as well as from Austria to China.
7 code implementations • 30 Dec 2017 • Xiangyu Zhao, Liang Zhang, Long Xia, Zhuoye Ding, Dawei Yin, Jiliang Tang
Recommender systems play a crucial role in mitigating the problem of information overload by suggesting personalized items or services to users.
no code implementations • 29 Mar 2017 • Krishnaram Kenthapadi, Stuart Ambler, Liang Zhang, Deepak Agarwal
The recently launched LinkedIn Salary product has been designed with the goal of providing compensation insights to the world's professionals and thereby helping them optimize their earning potential.
no code implementations • 27 Dec 2016 • Liang Zhang, Gang Wang, Daniel Romero, Georgios B. Giannakis
To circumvent the limitations of existing methods, the present work develops step sizes for RB-FW that enable a flexible selection of the number of blocks to update per iteration while ensuring convergence and feasibility of the iterates.
no code implementations • COLING 2016 • Yang Li, Ting Liu, Jing Jiang, Liang Zhang
Microblogging services allow users to create hashtags to categorize their posts.
1 code implementation • 23 Nov 2016 • Gang Wang, Liang Zhang, Georgios B. Giannakis, Mehmet Akcakaya, Jie Chen
Upon formulating sparse PR as an amplitude-based nonconvex optimization task, SPARTA works iteratively in two stages: In stage one, the support of the underlying sparse signal is recovered using an analytically well-justified rule, and subsequently, a sparse orthogonality-promoting initialization is obtained via power iterations restricted on the support; and, in the second stage, the initialization is successively refined by means of hard thresholding based gradient-type iterations.
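The two-stage procedure summarized in this abstract can be sketched in simplified form: a support-scoring rule plus a spectral initialization restricted to the estimated support, followed by hard-thresholded amplitude-based gradient iterations. The scoring rule, scaling, and step size below are illustrative assumptions, not SPARTA's exact constants.

```python
import numpy as np

def hard_threshold(z, s):
    """Keep the s largest-magnitude entries of z, zero the rest."""
    out = np.zeros_like(z)
    idx = np.argsort(np.abs(z))[::-1][:s]
    out[idx] = z[idx]
    return out

def sparse_pr_sketch(A, y, s, iters=100, mu=0.5):
    """Rough sketch of two-stage sparse phase retrieval from magnitudes
    y = |A x|, assuming Gaussian A and known sparsity s (illustrative)."""
    m, n = A.shape
    # --- stage 1: support estimate via average of y_i^2 * a_ij^2 ------
    scores = (y ** 2) @ (A ** 2) / m
    S = np.argsort(scores)[::-1][:s]
    # spectral initialization restricted to the estimated support
    Y = (A[:, S] * (y ** 2)[:, None]).T @ A[:, S] / m
    _, vecs = np.linalg.eigh(Y)
    z = np.zeros(n)
    z[S] = vecs[:, -1] * np.sqrt(np.mean(y ** 2))  # scale ~ ||x||
    # --- stage 2: hard-thresholded amplitude-based gradient steps -----
    for _ in range(iters):
        Az = A @ z
        grad = A.T @ (Az - y * np.sign(Az)) / m
        z = hard_threshold(z - mu * grad, s)
    return z

rng = np.random.default_rng(1)
n, s, m = 20, 3, 600
x = np.zeros(n)
x[[2, 7, 11]] = [2.0, -3.0, 1.5]
A = rng.standard_normal((m, n))
y = np.abs(A @ x)
z = sparse_pr_sketch(A, y, s)
# recovery is only up to a global sign flip
err = min(np.linalg.norm(z - x), np.linalg.norm(z + x)) / np.linalg.norm(x)
print(round(err, 3))
```

Note the global sign ambiguity: any solution is indistinguishable from its negation given magnitude-only measurements, hence the `min` over both signs in the error.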
no code implementations • NeurIPS 2008 • Liang Zhang, Deepak Agarwal
Multi-level hierarchical models provide an attractive framework for incorporating correlations induced in a response variable organized in a hierarchy.