no code implementations • 9 May 2024 • Binxiao Huang, Jason Chun Lok Li, Chang Liu, Ngai Wong
To exploit the abundant information contained in the mapping from input data to output labels, our scheme utilizes a network trained on the clean dataset as a trigger generator, producing poisons that significantly raise the success rate of backdoor attacks versus conventional approaches.
no code implementations • 3 May 2024 • Xincheng Feng, Guodong Shen, Jianhao Hu, Meng Li, Ngai Wong
Nonlinearities are crucial for capturing complex input-output relationships especially in deep neural networks.
no code implementations • 3 Apr 2024 • Taiqiang Wu, Chaofan Tao, Jiahao Wang, Zhe Zhao, Ngai Wong
Kullback-Leibler divergence has been widely used in Knowledge Distillation (KD) to compress Large Language Models (LLMs).
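For reference, here is a minimal sketch of the temperature-scaled KL distillation loss that such KD pipelines build on; the temperature value and toy logits are illustrative assumptions, not the paper's setting:

```python
import torch
import torch.nn.functional as F

def kd_kl_loss(student_logits, teacher_logits, T=2.0):
    """Forward KL(teacher || student) on temperature-softened
    distributions, the standard distillation objective."""
    p_teacher = F.softmax(teacher_logits / T, dim=-1)        # soft targets
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    # scale by T^2 to keep gradient magnitudes comparable across temperatures
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

# toy usage: 4 positions over a 10-way output distribution
loss = kd_kl_loss(torch.randn(4, 10), torch.randn(4, 10))
```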
1 code implementation • 28 Mar 2024 • Sidi Yang, Binxiao Huang, Mingdeng Cao, Yatai Ji, Hanzhong Guo, Ngai Wong, Yujiu Yang
Existing enhancement models often optimize for high performance while falling short of reducing hardware inference time and power consumption, especially on edge devices with constrained computing and storage resources.
no code implementations • 21 Feb 2024 • Ziyi Guan, Hantao Huang, Yupeng Su, Hong Huang, Ngai Wong, Hao Yu
Large Language Models (LLMs) have greatly advanced the natural language processing paradigm.
1 code implementation • 18 Feb 2024 • Yifan Yang, Jiajun Zhou, Ngai Wong, Zheng Zhang
Various parameter-efficient fine-tuning (PEFT) techniques have been proposed to enable computationally efficient fine-tuning while maintaining model performance.
no code implementations • 28 Dec 2023 • Jason Chun Lok Li, Chang Liu, Binxiao Huang, Ngai Wong
Existing approaches to Implicit Neural Representation (INR) can be interpreted as a global scene representation via a linear combination of Fourier bases of different frequencies.
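To make the "linear combination of Fourier bases" reading concrete, here is a minimal sketch of a generic Fourier-feature INR; the FourierINR class, frequency scale, and layer sizes are illustrative assumptions, not the paper's model:

```python
import torch
import torch.nn as nn

class FourierINR(nn.Module):
    """Coordinate -> value network whose first layer lifts (x, y) onto
    sin/cos Fourier bases; the head forms a learned combination of them."""
    def __init__(self, n_freqs=64, scale=10.0, out_dim=3):
        super().__init__()
        # fixed random frequency matrix (Fourier-feature encoding)
        self.register_buffer("B", torch.randn(2, n_freqs) * scale)
        self.head = nn.Sequential(
            nn.Linear(2 * n_freqs, 128), nn.ReLU(),
            nn.Linear(128, out_dim),
        )

    def forward(self, xy):                  # xy: (N, 2) in [0, 1]^2
        proj = 2 * torch.pi * xy @ self.B   # (N, n_freqs)
        feats = torch.cat([proj.sin(), proj.cos()], dim=-1)
        return self.head(feats)             # (N, out_dim), e.g. RGB

rgb = FourierINR()(torch.rand(1024, 2))
```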
no code implementations • 15 Dec 2023 • Jason Chun Lok Li, Rui Lin, Jiajun Zhou, Edmund Yin Mun Lam, Ngai Wong
Although the decomposition of convolutional kernels for lightweight CNNs has been well studied, existing works that rely on tensor network diagrams or hyperdimensional abstraction lack geometric intuition.
1 code implementation • 11 Dec 2023 • Binxiao Huang, Jason Chun Lok Li, Jie Ran, Boyu Li, Jiajun Zhou, Dahai Yu, Ngai Wong
Conventional super-resolution (SR) schemes make heavy use of convolutional neural networks (CNNs), which involve intensive multiply-accumulate (MAC) operations, and require specialized hardware such as graphics processing units.
no code implementations • 14 Nov 2023 • Rui Lin, Jason Chun Lok Li, Jiajun Zhou, Binxiao Huang, Jie Ran, Ngai Wong
Most deep neural networks (DNNs) consist fundamentally of convolutional and/or fully connected layers, wherein the linear transform can be cast as the product between a filter matrix and a data matrix obtained by arranging feature tensors into columns.
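A minimal numpy sketch of this casting for a stride-1, unpadded convolution; conv2d_as_matmul is a hypothetical helper that builds the data matrix via the usual im2col unrolling:

```python
import numpy as np

def conv2d_as_matmul(x, w):
    """Cast a stride-1, unpadded 2D convolution as filter-matrix times
    data-matrix: each column of the data matrix is one unrolled patch."""
    C, H, W = x.shape
    F_out, _, k, _ = w.shape
    Ho, Wo = H - k + 1, W - k + 1
    cols = np.empty((C * k * k, Ho * Wo))   # im2col data matrix
    for i in range(Ho):
        for j in range(Wo):
            cols[:, i * Wo + j] = x[:, i:i + k, j:j + k].ravel()
    filt = w.reshape(F_out, -1)             # filter matrix, one row per filter
    return (filt @ cols).reshape(F_out, Ho, Wo)

x = np.random.randn(3, 8, 8)                # C x H x W input
w = np.random.randn(4, 3, 3, 3)             # F x C x k x k kernels
y = conv2d_as_matmul(x, w)                  # 4 x 6 x 6 output
```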
no code implementations • 25 Jun 2023 • Binxiao Huang, Rui Lin, Chaofan Tao, Ngai Wong
Deep neural networks (DNNs) are incredibly vulnerable to crafted, imperceptible adversarial perturbations.
no code implementations • 19 Jun 2023 • Le Xu, Lei Cheng, Ngai Wong, Yik-Chung Wu, H. Vincent Poor
A probabilistic model is built to induce the common sparsity in the spatial domain, and a first-order Taylor expansion is adopted to eliminate the grid mismatch in the dictionaries.
1 code implementation • 19 Jun 2023 • Le Xu, Lei Cheng, Ngai Wong, Yik-Chung Wu
Tensor train (TT) representation has achieved tremendous success in visual data completion tasks, especially when it is combined with tensor folding.
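For context, the plain TT representation the entry refers to can be computed with the classic TT-SVD sweep; this sketch omits the tensor-folding step and the error control that the paper's completion method adds:

```python
import numpy as np

def tt_svd(x, ranks):
    """Decompose tensor x into TT cores of shape (r_{k-1}, n_k, r_k)
    via a left-to-right sweep of truncated SVDs (the classic TT-SVD)."""
    dims, d = x.shape, x.ndim
    cores, r_prev, mat = [], 1, x
    for k in range(d - 1):
        mat = mat.reshape(r_prev * dims[k], -1)
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(ranks[k], len(S))            # truncate to the target TT-rank
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        mat = S[:r, None] * Vt[:r]           # carry the remainder rightward
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))
    return cores

cores = tt_svd(np.random.randn(4, 5, 6, 7), ranks=[4, 8, 7])
print([c.shape for c in cores])              # chained core shapes
```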
1 code implementation • 16 May 2023 • Taiqiang Wu, Cheng Hou, Shanshan Lao, Jiayi Li, Ngai Wong, Zhe Zhao, Yujiu Yang
Knowledge Distillation (KD) is a predominant approach for BERT compression.
no code implementations • 27 Mar 2023 • Xiaoyan Qian, Chang Liu, Xiaojuan Qi, Siew-Chong Tan, Edmund Lam, Ngai Wong
3D automatic annotation has received increased attention since manually annotating 3D point clouds is laborious.
no code implementations • 24 Mar 2023 • Taiqiang Wu, Zhe Zhao, Jiahao Wang, Xingyu Bai, Lei Wang, Ngai Wong, Yujiu Yang
Distilling high-accuracy Graph Neural Networks (GNNs) to low-latency multilayer perceptrons (MLPs) on graph tasks has become a hot research topic.
no code implementations • 24 Feb 2023 • Jiajun Zhou, Jiajun Wu, Yizhao Gao, Yuhao Ding, Chaofan Tao, Boyu Li, Fengbin Tu, Kwang-Ting Cheng, Hayden Kwok-Hay So, Ngai Wong
To accelerate the inference of deep neural networks (DNNs), quantization with low-bitwidth numbers is actively researched.
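A minimal sketch of the symmetric uniform (fake) quantization that such low-bitwidth schemes start from; the 4-bit setting and per-tensor scale are illustrative choices:

```python
import torch

def quantize_uniform(w, bits=4):
    """Symmetric uniform quantization of a weight tensor: scale onto the
    signed integer grid, round, clamp, then de-quantize (fake quantization)."""
    qmax = 2 ** (bits - 1) - 1               # e.g. 7 for 4-bit signed
    scale = w.abs().max() / qmax
    q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax)
    return q * scale, scale                  # fake-quantized weights + scale

w = torch.randn(64, 64)
w_q, s = quantize_uniform(w, bits=4)
err = (w - w_q).abs().mean()                 # average quantization error
```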
no code implementations • 24 Dec 2022 • Binxiao Huang, Chaofan Tao, Rui Lin, Ngai Wong
Deep neural networks are incredibly vulnerable to crafted, human-imperceptible adversarial perturbations.
no code implementations • 17 Oct 2022 • Chaofan Tao, Ngai Wong
To the best of our knowledge, this is the first work to train both quantized and binary neural networks on ImageNet that consistently improve robustness under different attacks.
no code implementations • 13 Aug 2022 • Jie Ran, Rui Lin, Jason Chun Lok Li, Jiajun Zhou, Ngai Wong
A novel deep neural network (DNN) architecture is proposed wherein the filtering and linear transform are realized solely with product quantization (PQ).
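As a rough illustration of product quantization applied to a weight matrix (not the paper's architecture), each row is split into sub-vectors that are replaced by k-means centroids; the sub-vector count and codebook size below are arbitrary:

```python
import numpy as np
from sklearn.cluster import KMeans

def pq_encode(W, n_sub=4, n_codes=16):
    """Product quantization: split each row of W into n_sub sub-vectors
    and replace each with its nearest k-means centroid index."""
    d_sub = W.shape[1] // n_sub
    codebooks, codes = [], []
    for s in range(n_sub):
        block = W[:, s * d_sub:(s + 1) * d_sub]
        km = KMeans(n_clusters=n_codes, n_init=10).fit(block)
        codebooks.append(km.cluster_centers_)
        codes.append(km.labels_)             # one code index per row
    return codebooks, codes

def pq_decode(codebooks, codes):
    """Reassemble the approximation from centroids and code indices."""
    return np.hstack([cb[idx] for cb, idx in zip(codebooks, codes)])

W = np.random.randn(256, 64)
cb, idx = pq_encode(W)
W_hat = pq_decode(cb, idx)                   # PQ approximation of W
```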
1 code implementation • 20 Jul 2022 • Chang Liu, Xiaoyan Qian, Binxiao Huang, Xiaojuan Qi, Edmund Lam, Siew-Chong Tan, Ngai Wong
By enriching the sparse point clouds, our method achieves 4.48% and 4.03% better 3D AP on KITTI moderate and hard samples, respectively, versus the state-of-the-art autolabeler.
no code implementations • 17 May 2022 • Tianshu Hou, Peining Zhen, Ngai Wong, Quan Chen, Guoyong Shi, Shuqi Wang, Hai-Bao Chen
Electromigration (EM) is one of the major concerns in the reliability analysis of very large scale integration (VLSI) systems due to continuous technology scaling.
no code implementations • 29 Mar 2022 • Tianshu Hou, Ngai Wong, Quan Chen, Zhigang Ji, Hai-Bao Chen
The electromigration (EM)-induced reliability issues in very large scale integration (VLSI) circuits have attracted increased attention due to continuous technology scaling.
1 code implementation • 29 Mar 2022 • Rui Lin, Cong Chen, Ngai Wong
Existing low-rank tensor completion (LRTC) approaches aim at restoring a partially observed tensor by imposing a global low-rank constraint on the underlying completed tensor.
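The matrix analogue of such a global low-rank constraint can be sketched with a singular-value-thresholding loop; this toy svt_complete is an illustrative assumption, not the paper's LRTC algorithm, and the threshold tau is arbitrary:

```python
import numpy as np

def svt_complete(X, mask, tau=5.0, n_iter=200):
    """Alternate singular value soft-thresholding (global low-rank prior)
    with re-imposing the observed entries."""
    Y = np.where(mask, X, 0.0)
    for _ in range(n_iter):
        U, S, Vt = np.linalg.svd(Y, full_matrices=False)
        Y = (U * np.maximum(S - tau, 0.0)) @ Vt   # shrink singular values
        Y[mask] = X[mask]                         # keep known entries fixed
    return Y

ground_truth = np.random.randn(30, 2) @ np.random.randn(2, 30)  # rank 2
mask = np.random.rand(30, 30) < 0.5               # observe half the entries
X_hat = svt_complete(ground_truth, mask)
```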
no code implementations • 29 Mar 2022 • Chang Liu, Xiaoyan Qian, Xiaojuan Qi, Edmund Y. Lam, Siew-Chong Tan, Ngai Wong
While a few previous studies tried to automatically generate 3D bounding boxes from weak labels such as 2D boxes, their quality remains sub-optimal compared to that of human annotators.
1 code implementation • NeurIPS 2021 • Rui Lin, Jie Ran, King Hung Chiu, Graziano Chesi, Ngai Wong
We introduce a new kind of linear transform named Deformable Butterfly (DeBut) that generalizes the conventional butterfly matrices and can be adapted to various input-output dimensions.
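For intuition, a conventional square butterfly matrix (the special case that DeBut generalizes) factors into log2(n) sparse stages, each mixing index pairs at one stride; the random entries in this sketch are purely illustrative:

```python
import numpy as np

def random_butterfly(n):
    """Build an n x n butterfly matrix (n a power of two) as a product of
    log2(n) sparse factors with exactly two nonzeros per row."""
    assert n & (n - 1) == 0, "n must be a power of two"
    M = np.eye(n)
    stride = n // 2
    while stride >= 1:
        B = np.zeros((n, n))
        for i in range(n):
            j = i ^ stride                   # partner index at this stride
            a, b = np.random.randn(2)
            B[i, i], B[i, j] = a, b          # 2x2-block butterfly pattern
        M = B @ M
        stride //= 2
    return M

M = random_butterfly(8)   # dense product of 3 sparse butterfly factors
```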
no code implementations • ACL 2022 • Chaofan Tao, Lu Hou, Wei zhang, Lifeng Shang, Xin Jiang, Qun Liu, Ping Luo, Ngai Wong
We find that previous quantization methods fail on generative tasks due to the homogeneous word embeddings caused by reduced capacity, and the varied distribution of weights.
no code implementations • 16 Mar 2022 • Binxiao Huang, Chaofan Tao, Rui Lin, Ngai Wong
We hope this work sheds light on the design of more robust neural networks.
no code implementations • 10 May 2021 • Jie Ran, Rui Lin, Hayden K. H. So, Graziano Chesi, Ngai Wong
Elasticities in depth, width, kernel size and resolution have been explored in compressing deep neural networks (DNNs).
1 code implementation • 8 May 2021 • Rui Lin, Jie Ran, Dongpeng Wang, King Hung Chiu, Ngai Wong
Recent results have revealed an interesting observation in a trained convolutional neural network (CNN), namely, the rank of a feature map channel matrix remains surprisingly constant regardless of the input image.
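This observation can be probed directly: unfold each feature map into its C x (H*W) channel matrix and compare ranks across inputs. The sketch below assumes a recent torchvision with downloadable pretrained weights, and random-noise inputs merely stand in for real images:

```python
import torch
import torchvision.models as models

# Hook one layer of a pretrained CNN (resnet18 is just a convenient
# example backbone) and print the channel-matrix rank per input.
net = models.resnet18(weights="IMAGENET1K_V1").eval()
feat = {}
net.layer2.register_forward_hook(lambda m, i, o: feat.update(out=o))

for _ in range(3):
    x = torch.randn(1, 3, 224, 224)          # stand-in for a real image
    with torch.no_grad():
        net(x)
    C = feat["out"].shape[1]
    mat = feat["out"].reshape(C, -1)         # channel matrix, C x (H*W)
    print(torch.linalg.matrix_rank(mat).item())
```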
no code implementations • 22 Mar 2021 • Chang Liu, Xiaojuan Qi, Edmund Lam, Ngai Wong
Neuromorphic event cameras, which capture the optical changes of a scene, have drawn increasing attention due to their high speed and low power consumption.
1 code implementation • 15 Feb 2021 • Chaofan Tao, Rui Lin, Quan Chen, Zhaoyang Zhang, Ping Luo, Ngai Wong
Prior arts often discretize the network weights by carefully tuning hyper-parameters of quantization (e.g., non-uniform stepsize and layer-wise bitwidths), which are complicated and sub-optimal because the full-precision and low-precision models have a large discrepancy.
no code implementations • 4 Nov 2020 • Yuan Cheng, Yuchao Yang, Hai-Bao Chen, Ngai Wong, Hao Yu
Real-time understanding in video is crucial in various AI applications such as autonomous driving.
no code implementations • 13 Oct 2020 • Le Xu, Lei Cheng, Ngai Wong, Yik-Chung Wu
Tensor train (TT) decomposition, a powerful tool for analyzing multidimensional data, exhibits superior performance in many machine learning tasks.
no code implementations • 28 Feb 2020 • Rui Lin, Ching-Yun Ko, Zhuolun He, Cong Chen, Yuan Cheng, Hao Yu, Graziano Chesi, Ngai Wong
Emerging edge computing has promoted immense interest in compacting a neural network without sacrificing much accuracy.
no code implementations • 2 Jan 2020 • Cong Chen, Kim Batselier, Wenjian Yu, Ngai Wong
In this paper, we propose a tensor train (TT)-based kernel technique for the first time, and apply it to the conventional support vector machine (SVM) for image classification.
1 code implementation • 2 Dec 2019 • Zhaoyang Lyu, Ching-Yun Ko, Zhifeng Kong, Ngai Wong, Dahua Lin, Luca Daniel
We draw inspiration from such work and further demonstrate the optimality of deterministic CROWN (Zhang et al. 2018) solutions in a given linear programming problem under mild constraints.
no code implementations • 17 May 2019 • Ching-Yun Ko, Rui Lin, Shu Li, Ngai Wong
Popular crowdsourcing techniques mostly focus on evaluating workers' labeling quality before adjusting their weights during label aggregation.
2 code implementations • 17 May 2019 • Ching-Yun Ko, Zhaoyang Lyu, Tsui-Wei Weng, Luca Daniel, Ngai Wong, Dahua Lin
The vulnerability to adversarial attacks has been a critical issue for deep neural networks.
no code implementations • 12 Nov 2018 • Cong Chen, Kim Batselier, Ching-Yun Ko, Ngai Wong
This work presents the matrix product operator RBM (MPORBM) that utilizes a tensor network generalization of Mv/TvRBM, preserves input formats in both the visible and hidden layers, and results in higher expressive power.
no code implementations • 9 Nov 2018 • Ching-Yun Ko, Cong Chen, Yuke Zhang, Kim Batselier, Ngai Wong
Sum-product networks (SPNs) represent an emerging class of neural networks with clear probabilistic semantics and superior inference speed over graphical models.
1 code implementation • 17 Apr 2018 • Ching-Yun Ko, Kim Batselier, Wenjian Yu, Ngai Wong
We propose a new tensor completion method based on tensor trains.
no code implementations • 17 Apr 2018 • Cong Chen, Kim Batselier, Ching-Yun Ko, Ngai Wong
There has been growing interest in extending traditional vector-based machine learning techniques to their tensor forms.
1 code implementation • 20 Dec 2016 • Zhongming Chen, Kim Batselier, Johan A. K. Suykens, Ngai Wong
In pattern classification, polynomial classifiers are well-studied methods as they are capable of generating complex decision surfaces.
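A small worked example of such a polynomial classifier, using off-the-shelf scikit-learn pieces rather than the paper's tensor-based formulation:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LogisticRegression

# Degree-3 polynomial classifier on a toy 2D problem: lifting the inputs
# onto monomial features yields a nonlinear (here circular) decision surface.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(int)   # circular boundary

clf = make_pipeline(PolynomialFeatures(degree=3),
                    LogisticRegression(max_iter=1000))
clf.fit(X, y)
print(clf.score(X, y))   # near-perfect fit on this separable toy problem
```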
1 code implementation • 18 Oct 2016 • Kim Batselier, Zhongming Chen, Ngai Wong
This article introduces a Tensor Network Kalman filter, which can estimate state vectors that are exponentially large without ever having to explicitly construct them.
Systems and Control
1 code implementation • 7 Jul 2014 • Kim Batselier, Haotian Liu, Ngai Wong
We propose a constructive algorithm that decomposes an arbitrary real tensor into a finite sum of orthonormal rank-1 outer products.
Numerical Analysis
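In the matrix special case, the SVD already yields such a decomposition: a finite sum of mutually orthonormal rank-1 outer products, which the paper's algorithm extends to arbitrary tensors. A quick numpy check:

```python
import numpy as np

# SVD writes a real matrix as sum_i sigma_i * u_i v_i^T, where the rank-1
# terms u_i v_i^T are mutually orthonormal under the Frobenius inner product.
A = np.random.randn(5, 7)
U, S, Vt = np.linalg.svd(A, full_matrices=False)
terms = [S[i] * np.outer(U[:, i], Vt[i]) for i in range(len(S))]
assert np.allclose(sum(terms), A)            # exact finite reconstruction
```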