no code implementations • 3 Dec 2023 • Yiming Li, Mingyan Zhu, Junfeng Guo, Tao Wei, Shu-Tao Xia, Zhan Qin
We argue that the intensity constraint of existing SSBAs stems largely from their trigger patterns being 'content-irrelevant' and therefore acting as 'noise' for both humans and DNNs.
no code implementations • 18 Oct 2023 • Yuanyuan Wang, Yang Zhang, Zhiyong Wu, Zhihan Yang, Tao Wei, Kun Zou, Helen Meng
Existing augmentation methods for speaker verification manipulate the raw signal, which is time-consuming, and the augmented samples lack diversity.
no code implementations • 20 May 2023 • Longkang Peng, Tao Wei, Xuehong Chen, Xiaobei Chen, Rui Sun, Luoma Wan, Jin Chen, Xiaolin Zhu
However, the distribution of real-world human-annotated label noise on remote sensing images and its impact on ConvNets have not been investigated.
no code implementations • ICCV 2023 • Xue Wang, Zhibo Wang, Haiqin Weng, Hengchang Guo, Zhifei Zhang, Lu Jin, Tao Wei, Kui Ren
Given the insufficient study of such complex causal questions, we make the first attempt to explain different causal questions through contrastive explanations in a unified framework, i.e., Counterfactual Contrastive Explanation (CCE), which visually and intuitively answers the aforementioned questions via a novel positive-negative saliency-based explanation scheme.
1 code implementation • 4 Aug 2022 • Yiming Li, Mingyan Zhu, Xue Yang, Yong Jiang, Tao Wei, Shu-Tao Xia
The rapid development of DNNs has benefited from the existence of some high-quality datasets (e.g., ImageNet), which allow researchers and developers to easily verify the performance of their methods.
no code implementations • CVPR 2022 • Zhibo Wang, Xiaowei Dong, Henry Xue, Zhifei Zhang, Weifeng Chiu, Tao Wei, Kui Ren
Prioritizing fairness is of central importance in artificial intelligence (AI) systems, especially in societal applications: e.g., hiring systems should recommend applicants equally across demographic groups, and risk assessment systems must eliminate racism in criminal justice.
no code implementations • 29 Sep 2021 • Tao Wei, Yonghong Tian, YaoWei Wang, Yun Liang, Chang Wen Chen
In this research, we propose a novel and principled operator called optimized separable convolution, which, by optimally designing the internal number of groups and kernel sizes of general separable convolutions, can achieve a complexity of O(C^{\frac{3}{2}}K).
no code implementations • 1 May 2021 • Zhenyu Xu, Thomas Mauldin, Zheyi Yao, Gerald Hefferman, Tao Wei
These results demonstrate that a fully reconfigurable and highly integrated TDR (iTDR) can be implemented on a field-programmable gate array (FPGA) chip without using any external circuit components.
no code implementations • 20 Jan 2021 • Tao Wei, Angelica I Aviles-Rivero, Shuo Wang, Yuan Huang, Fiona J Gilbert, Carola-Bibiane Schönlieb, Chang Wen Chen
The current state-of-the-art approaches for medical image classification rely on fine-tuning, the de facto method for ConvNets.
no code implementations • 1 Jan 2021 • Tao Wei, Yonghong Tian, Chang Wen Chen
In this research, we propose a novel operator called \emph{optimal separable convolution}, which can be computed in $O(C^{\frac{3}{2}}KHW)$ by optimally designing the internal number of groups and kernel sizes of general separable convolutions.
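To make the complexity claim concrete, here is a minimal sketch comparing per-layer multiply counts for a standard convolution, a depthwise-separable convolution, and the $O(C^{\frac{3}{2}}KHW)$ asymptotic bound stated in the abstract. The derivation of the optimal grouping is not given in this snippet, so the bound is written with an assumed constant factor of 1; the function names and the chosen layer shape are illustrative, not from the paper.

```python
# Multiply counts for one conv layer with C input/output channels,
# a K x K kernel, and an H x W output feature map.

def standard_conv_flops(C, K, H, W):
    # dense convolution: every output channel sees every input channel
    return C * C * K * K * H * W

def depthwise_separable_flops(C, K, H, W):
    # depthwise K x K conv (C * K^2 per position) + 1 x 1 pointwise (C^2)
    return C * K * K * H * W + C * C * H * W

def optimal_separable_bound(C, K, H, W):
    # asymptotic form O(C^{3/2} K H W) from the abstract; constant assumed 1
    return C ** 1.5 * K * H * W

C, K, H, W = 256, 3, 56, 56
print(standard_conv_flops(C, K, H, W))        # dense baseline, O(C^2 K^2 HW)
print(depthwise_separable_flops(C, K, H, W))  # depthwise + pointwise
print(int(optimal_separable_bound(C, K, H, W)))
```

For the shape above, the dense convolution costs roughly 48x the depthwise-separable one, and the stated optimal bound is smaller still, which is the scaling advantage the abstract claims.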
1 code implementation • ICLR 2020 • Yunhan Jia, Yantao Lu, Junjie Shen, Qi Alfred Chen, Hao Chen, Zhenyu Zhong, Tao Wei
Recent work in adversarial machine learning started to focus on the visual perception in autonomous driving and studied Adversarial Examples (AEs) for object detection models.
no code implementations • 23 Aug 2019 • Dou Goodman, Xingjian Li, Ji Liu, Dejing Dou, Tao Wei
Finally, we conduct extensive experiments on a wide range of datasets, and the results show that our AT+ALP achieves state-of-the-art defense performance.
no code implementations • 19 Jun 2019 • Dou Goodman, Tao Wei
Many recent works have demonstrated that deep learning models are vulnerable to adversarial examples. Fortunately, generating adversarial examples usually requires white-box access to the victim model, while an attacker can typically access only the APIs exposed by cloud platforms.
1 code implementation • 27 May 2019 • Yunhan Jia, Yantao Lu, Junjie Shen, Qi Alfred Chen, Zhenyu Zhong, Tao Wei
Recent work in adversarial machine learning started to focus on the visual perception in autonomous driving and studied Adversarial Examples (AEs) for object detection models.
1 code implementation • 8 May 2019 • Yunhan Jia, Yantao Lu, Senem Velipasalar, Zhenyu Zhong, Tao Wei
Neural networks are known to be vulnerable to carefully crafted adversarial examples, and these malicious samples often transfer, i.e., they maintain their effectiveness even against other models.
no code implementations • ICLR 2018 • Tao Wei, Changhu Wang, Chang Wen Chen
In this research, we present a novel learning scheme called network iterative learning for deep neural networks.
no code implementations • 12 Jan 2017 • Tao Wei, Changhu Wang, Chang Wen Chen
Different from existing work, where basic morphing types at the layer level were addressed, we target the central problem of network morphism at a higher level, i.e., how a convolutional layer can be morphed into an arbitrary module of a neural network.
no code implementations • 5 Mar 2016 • Tao Wei, Changhu Wang, Yong Rui, Chang Wen Chen
The second requirement for this network morphism is its ability to deal with non-linearity in a network.