no code implementations • 29 Feb 2024 • Fangyuan Zhang, Huichi Zhou, Shuangjiao Li, Hongtao Wang
Deep neural networks have been proven vulnerable to adversarial examples, and various methods have been proposed to defend against adversarial attacks in natural language processing tasks.
1 code implementation • 3 Nov 2023 • Fangyuan Zhang, TingTing Liang, Zhengyuan Wu, Yuyu Yin
Recently, significant progress has been made in the development of Vision Language Models (VLMs), expanding the capabilities of LLMs and enabling them to execute more diverse instructions.
no code implementations • 26 Oct 2023 • Rui Qin, Ming Sun, Fangyuan Zhang, Xing Wen, Bin Wang
However, we find that a codebook based on HR reconstruction may not effectively capture the complex correlations between low-resolution (LR) and HR images.
no code implementations • 28 Jun 2023 • Guandu Liu, Fangyuan Zhang, Tianxiang Pan, Bin Wang
Reliable pseudo-labels from unlabeled data play a key role in semi-supervised object detection (SSOD).
no code implementations • 11 Jul 2021 • Fangyuan Zhang, Tianxiang Pan, Bin Wang
In our study, we observe that class imbalance in SSOD severely impedes the effectiveness of self-training.
no code implementations • 25 Jun 2021 • An Chen, Motonobu Kanagawa, Fangyuan Zhang
We study a fully funded, collective defined-contribution (DC) pension system with multiple overlapping generations.