1 code implementation • Findings (ACL) 2022 • Dawei Li, Yanran Li, Jiayi Zhang, Ke Li, Chen Wei, Jianwei Cui, Bin Wang
Existing commonsense knowledge bases often organize tuples in an isolated manner, which makes it difficult for commonsense conversational models to plan the next steps.
no code implementations • ECCV 2020 • Shuchen Weng, Wenbo Li, Dawei Li, Hongxia Jin, Boxin Shi
We study conditional image repainting, where a model is trained to generate visual content conditioned on user inputs and composite the generated content seamlessly onto a user-provided image while preserving the semantics of the users' inputs.
1 code implementation • 8 May 2024 • Dawei Li, Shu Yang, Zhen Tan, Jae Young Baik, Sunkwon Yun, Joseph Lee, Aaron Chacko, BoJian Hou, Duy Duong-Tran, Ying Ding, Huan Liu, Li Shen, Tianlong Chen
Within a synergized framework in which the LLM and KG mutually enhance each other, we first leverage the LLM to construct an evolving AD-specific knowledge graph (KG) sourced from AD-related scientific literature, and then utilize a coarse-to-fine sampling method with a novel self-aware knowledge retrieval approach to select appropriate knowledge from the KG to augment the LLM's inference capabilities.
no code implementations • 7 May 2024 • Yongqi Tong, Sizhe Wang, Dawei Li, Yifan Wang, Simeng Han, Zi Lin, Chengsong Huang, Jiaxin Huang, Jingbo Shang
Therefore, we present PuzzleBen, a weakly supervised benchmark that comprises 25,147 complex questions, answers, and human-generated rationales across various domains, such as brainteasers, puzzles, riddles, parajumbles, and critical reasoning tasks.
1 code implementation • 16 Apr 2024 • Hengyuan Zhang, Yanru Wu, Dawei Li, Zacc Yang, Rui Zhao, Yong Jiang, Fei Tan
In an overall evaluation of both speciality and versatility, CoFiTune consistently outperforms baseline methods across diverse tasks and model scales.
1 code implementation • 2 Apr 2024 • Dawei Li, William Hogan, Jingbo Shang
This strategy enables a larger attack budget for entities and coaxes the model to leverage relational patterns embedded in the context.
no code implementations • 29 Mar 2024 • Yongqi Tong, Dawei Li, Sizhe Wang, Yujia Wang, Fei Teng, Jingbo Shang
We conduct a series of experiments to prove LLMs can obtain benefits from mistakes in both directions.
no code implementations • 12 Mar 2024 • Hengyuan Zhang, Zitao Liu, Chenming Shang, Dawei Li, Yong Jiang
However, the inherent black-box nature of deep learning techniques often poses a hurdle for teachers to fully embrace the model's prediction results.
1 code implementation • 28 Jan 2024 • Dawei Li, Zhen Tan, Tianlong Chen, Huan Liu
While textual information significantly enhances the performance of pre-trained language models (PLMs) in knowledge graph completion (KGC), the static and noisy nature of existing corpora collected from Wikipedia articles or synsets definitions often limits the potential of PLM-based KGC models.
no code implementations • 6 Nov 2023 • Dawei Li, Yaxuan Li, Dheeraj Mekala, Shuyao Li, Yulin Wang, Xueqi Wang, William Hogan, Jingbo Shang
DAIL leverages the intuition that large language models are more familiar with the content generated by themselves.
1 code implementation • 20 Oct 2023 • Dawei Li, Hengyuan Zhang, Yanran Li, Shiping Yang
In this work, we tackle the scenario of understanding characters in scripts, which aims to learn the characters' personalities and identities from their utterances.
no code implementations • 18 Oct 2023 • Yongqi Tong, Yifan Wang, Dawei Li, Sizhe Wang, Zi Lin, Simeng Han, Jingbo Shang
Chain-of-Thought (CoT) prompting and its variants explore equipping large language models (LLMs) with high-level reasoning abilities by emulating human-like linear cognition and logic.
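As background, here is a minimal sketch of the CoT prompting style described above; the few-shot example and trigger phrase are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of Chain-of-Thought (CoT) prompting: prepend a worked
# example whose answer is reasoned out step by step, then append a trigger
# phrase so the model continues with its own intermediate reasoning.
FEW_SHOT_EXAMPLE = (
    "Q: A bag has 3 red and 5 blue marbles. How many marbles in total?\n"
    "A: There are 3 red marbles and 5 blue marbles. 3 + 5 = 8. "
    "The answer is 8.\n"
)

def build_cot_prompt(question: str) -> str:
    """Compose a simple few-shot CoT prompt for an arbitrary question."""
    return FEW_SHOT_EXAMPLE + f"Q: {question}\nA: Let's think step by step."

prompt = build_cot_prompt("If a train travels 60 km in 1.5 hours, what is its speed?")
```

The resulting string would be sent to an LLM as-is; variants differ mainly in how the exemplars and the trigger phrase are chosen.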
no code implementations • 21 Jun 2023 • Mingjie Pan, Yulu Gan, Fangxu Zhou, Jiaming Liu, Aimin Wang, Shanghang Zhang, Dawei Li
Since the diffusion model learns the universal structural distribution of biological tissues, which is independent of axial resolution, DiffuseIR can reconstruct authentic high-axial-resolution images from inputs with unseen low axial resolutions without requiring re-training.
no code implementations • 9 Jun 2023 • Hengyuan Zhang, Dawei Li, Yanran Li, Chenming Shang, Chufan Shi, Yong Jiang
The standard definition generation task requires automatically producing mono-lingual definitions (e.g., English definitions for English words), but ignores that the generated definitions may themselves contain words unfamiliar to language learners.
no code implementations • 9 May 2023 • Zhenge Jia, Dawei Li, Cong Liu, Liqi Liao, Xiaowei Xu, Lichuan Ping, Yiyu Shi
This paper concludes with the direction of improvement for the future TinyML design for health monitoring applications.
1 code implementation • 6 Apr 2023 • Yite Wang, Dawei Li, Ruoyu Sun
Recent advances in neural tangent kernel (NTK) theory suggest that the training dynamics of large enough neural networks are closely related to the spectrum of the NTK.
1 code implementation • International Conference on Learning Representations 2023 • Kailang Ma, Yu Sun, Jian Cui, Dawei Li, Zhenyu Guan and Jianwei Liu
Furthermore, we demonstrate that our method facilitates the existing gradient inversion attacks by exploiting the recovered labels, with an increase of 6-7 in PSNR on both MNIST and CIFAR100.
1 code implementation • 2 Oct 2022 • Hengyuan Zhang, Dawei Li, Shiping Yang, Yanran Li
Recently, pre-trained transformer-based models have achieved great success in the task of definition generation (DG).
no code implementations • 11 Aug 2022 • Chenlong Zhang, Dawei Li, Haodong Li
A fixed-wing Miniature Air Vehicle (MAV) not only exhibits coupled longitudinal motion, but is also more susceptible to wind disturbance due to its lighter weight, which brings more challenges to its altitude and airspeed controller design.
no code implementations • 17 Mar 2021 • Boxiang Dong, Hui Wang, Aparna S. Varde, Dawei Li, Bharath K. Samanthula, Weifeng Sun, Liang Zhao
To achieve high detection accuracy on imbalanced data, we design a novel attack-sharing loss function that can effectively move the decision boundary towards the attack classes and eliminate the bias towards the majority/benign class.
no code implementations • 1 Jan 2021 • Dawei Li, Ruoyu Sun
The Barzilai-Borwein (BB) method has demonstrated great empirical success in nonlinear optimization.
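As background, here is a minimal sketch of the Barzilai-Borwein (long) step size applied to a toy quadratic; the test function and hyperparameters are illustrative assumptions, not from the paper.

```python
def bb_gradient_descent(grad, x0, alpha0=0.1, iters=50, tol=1e-10):
    """Gradient descent with the Barzilai-Borwein (BB1) step size:
    alpha_k = (s^T s) / (s^T y), where s = x_k - x_{k-1} and
    y = grad(x_k) - grad(x_{k-1})."""
    x_prev = list(x0)
    g_prev = grad(x_prev)
    # The first step uses a fixed step size; BB needs two iterates.
    x = [xi - alpha0 * gi for xi, gi in zip(x_prev, g_prev)]
    for _ in range(iters):
        g = grad(x)
        if sum(gi * gi for gi in g) ** 0.5 < tol:
            break  # gradient is (numerically) zero: done
        s = [a - b for a, b in zip(x, x_prev)]
        y = [a - b for a, b in zip(g, g_prev)]
        sy = sum(a * b for a, b in zip(s, y))
        if abs(sy) < 1e-16:
            break  # step size undefined; stop rather than divide by ~0
        alpha = sum(a * a for a in s) / sy  # BB1 (long) step size
        x_prev, g_prev = x, g
        x = [xi - alpha * gi for xi, gi in zip(x, g)]
    return x

# Minimize f(x) = 0.5 * (x1^2 + 10 * x2^2); the minimizer is the origin.
grad = lambda x: [x[0], 10.0 * x[1]]
x_star = bb_gradient_descent(grad, [1.0, 1.0])
```

On quadratics like this the BB step adapts to the local curvature, which is the source of the empirical success the abstract refers to.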
no code implementations • ICLR 2021 • Naichen Shi, Dawei Li, Mingyi Hong, Ruoyu Sun
Removing this assumption allows us to establish a phase transition from divergence to non-divergence for RMSProp.
1 code implementation • 14 Dec 2020 • Yuxuan Zhang, Chen Gong, Dawei Li, Zhi-Wei Wang, Shengda D Pu, Alex W Robertson, Hong Yu, John Parrington
A reasonable prediction of the infectious disease transmission process under different disease control strategies is an important reference for policymakers.
no code implementations • 2 Jul 2020 • Ruoyu Sun, Dawei Li, Shiyu Liang, Tian Ding, R. Srikant
Second, we discuss a few rigorous results on the geometric properties of wide networks such as "no bad basin", and some modifications that eliminate sub-optimal local minima and/or decreasing paths to infinity.
no code implementations • 4 Nov 2019 • Tian Ding, Dawei Li, Ruoyu Sun
More specifically, we prove that for any multi-layer network with generic input data and non-linear activation functions, sub-optimal local minima can exist, no matter how wide the network is (as long as the last hidden layer has at least two neurons).
no code implementations • 24 Aug 2019 • Pengfei Guo, Dawei Li, Xingde Li
We added customized skip connections between the compression CNNs and the reconstruction CNNs to preserve detail information, and trained the two networks together with the semantically segmented image patches from the data preprocessing module.
1 code implementation • 21 Aug 2019 • Yuan Dong, Dawei Li, Chi Zhang, Chuhan Wu, Hong Wang, Ming Xin, Jianlin Cheng, Jian Lin
A significant novelty of the proposed RGAN is that it combines a supervised, regression-based convolutional neural network (CNN) with the traditional unsupervised GAN, thus overcoming a common technical barrier of traditional GANs, which cannot generate data associated with given continuous quantitative labels.
Computational Physics • Materials Science • Applied Physics
no code implementations • 12 Aug 2019 • Dawei Li, Yan Cao, Guoliang Shi, Xin Cai, Yang Chen, Sifan Wang, Siyuan Yan
The proposed method can also facilitate the automatic traits estimation of each single leaf (such as the leaf area, length, and width), which has potential to become a highly effective tool for plant research and agricultural engineering.
no code implementations • 2 Jul 2019 • Dawei Li, Siyuan Yan, Xin Cai, Yan Cao, Sifan Wang
In this paper, we present an integrated filter which comprises a weighted local guided image filter and a weighted spatiotemporal tree filter.
no code implementations • SEMEVAL 2019 • Dawei Li, Jin Wang, Xue-jie Zhang
This paper describes our approach to the sentiment analysis of Twitter textual conversations based on deep learning.
no code implementations • 26 Mar 2019 • Dawei Li, Serafettin Tasci, Shalini Ghosh, Jingwen Zhu, Junting Zhang, Larry Heck
The key component of RILOD is a novel incremental learning algorithm that trains end-to-end for one-stage deep object detection models only using training data of new object classes.
no code implementations • 20 Mar 2019 • Jie Zhang, Junting Zhang, Shalini Ghosh, Dawei Li, Jingwen Zhu, Heming Zhang, Yalin Wang
Lifelong learning, the problem of continual learning where tasks arrive in sequence, has lately been attracting more attention in the computer vision community.
2 code implementations • 19 Mar 2019 • Junting Zhang, Jie Zhang, Shalini Ghosh, Dawei Li, Serafettin Tasci, Larry Heck, Heming Zhang, C. -C. Jay Kuo
The idea is to first train a separate model only for the new classes, and then combine the two individual models trained on data of two distinct sets of classes (old classes and new classes) via a novel double distillation training objective.
no code implementations • 3 Feb 2019 • Jie Zhang, Xiaolong Wang, Dawei Li, Shalini Ghosh, Abhishek Kolagunda, Yalin Wang
State-of-the-art deep model compression methods exploit the low-rank approximation and sparsity pruning to remove redundant parameters from a learned hidden layer.
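As background, here is a minimal pure-Python sketch of the low-rank approximation idea mentioned above, computing a rank-1 factorization of a weight matrix via power iteration; practical compression methods use truncated SVD per layer, and the example matrix is an illustrative assumption.

```python
def rank1_approx(W, iters=100):
    """Approximate matrix W (list of rows) by its best rank-1 factorization,
    found via power iteration on W^T W -- a toy sketch of low-rank
    weight compression."""
    m, n = len(W), len(W[0])
    v = [1.0] * n
    for _ in range(iters):
        # u = W v
        u = [sum(W[i][j] * v[j] for j in range(n)) for i in range(m)]
        # v = W^T u, then normalize to keep the iteration stable
        v = [sum(W[i][j] * u[i] for i in range(m)) for j in range(n)]
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]
    u = [sum(W[i][j] * v[j] for j in range(n)) for i in range(m)]
    # W is approximated by u v^T (the singular value is folded into u)
    return [[u[i] * v[j] for j in range(n)] for i in range(m)]

# A rank-1 matrix is reconstructed exactly by its rank-1 approximation.
W = [[1.0, 2.0], [2.0, 4.0]]
approx = rank1_approx(W)
```

Storing the factors u and v instead of W reduces parameters from m*n to m+n, which is the redundancy-removal effect the abstract describes.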
no code implementations • 28 Dec 2018 • Dawei Li, Tian Ding, Ruoyu Sun
Wide networks are often believed to have a nice optimization landscape, but what rigorous results can we prove?
no code implementations • 4 Jun 2018 • Jie Zhang, Xiaolong Wang, Dawei Li, Yalin Wang
Recurrent neural networks (RNNs) achieve cutting-edge performance on a variety of problems.
1 code implementation • 13 Apr 2018 • Yang Song, Jingwen Zhu, Dawei Li, Xiaolong Wang, Hairong Qi
Given an arbitrary face image and an arbitrary speech clip, the proposed work attempts to generate a talking face video with accurate lip synchronization while maintaining smooth transitions of both lip and facial movement over the entire video clip.
no code implementations • 13 Mar 2018 • Kai Xu, Dawei Li, Nick Cassimatis, Xiaolong Wang
In this paper, we propose LCANet, an end-to-end deep neural network based lipreading system.
Ranked #2 on Lipreading on GRID corpus (mixed-speech)
no code implementations • 22 Dec 2017 • Xianzhi Du, Xiaolong Wang, Dawei Li, Jingwen Zhu, Serafettin Tasci, Cameron Upright, Stephen Walsh, Larry Davis
Compared to the general semantic segmentation problem, portrait segmentation has higher precision requirement on boundary area.
no code implementations • 16 Aug 2017 • Dawei Li, Xiaolong Wang, Deguang Kong
As observed in the experiment, DeepRebirth achieves more than 3x speed-up and 2.5x run-time memory saving on GoogLeNet with only a 0.4% drop in top-5 accuracy on ImageNet.