1 code implementation • 20 Nov 2023 • Jiezhang Cao, Yue Shi, Kai Zhang, Yulun Zhang, Radu Timofte, Luc van Gool
Due to the inherent property of diffusion models, most existing methods require long serial sampling chains to restore HQ images step by step, incurring long sampling times and high computational cost.
no code implementations • 19 Nov 2023 • Zhenghao Pan, Haijin Zeng, Jiezhang Cao, Kai Zhang, Yongyong Chen
Specifically, we are the first to employ a pre-trained diffusion model, trained on a substantial corpus of RGB images, as the generative denoiser within the Plug-and-Play framework.
2 code implementations • 15 May 2023 • Yuanzhi Zhu, Kai Zhang, Jingyun Liang, Jiezhang Cao, Bihan Wen, Radu Timofte, Luc van Gool
Although diffusion models have shown impressive performance for high-quality image synthesis, their potential to serve as a generative denoiser prior for plug-and-play image restoration (IR) methods remains underexplored.
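To make the plug-and-play idea concrete, the sketch below alternates a closed-form data-consistency step with a generic denoiser step in half-quadratic-splitting style. This is a hypothetical simplification: the moving-average denoiser stands in for a learned (e.g. diffusion) prior, the degradation is assumed to be identity-plus-noise, and all names are made up.

```python
# Minimal plug-and-play HQS sketch (illustrative, not any paper's code):
# alternate a closed-form data step with a generic denoiser step.

def moving_average_denoiser(z, radius=1):
    """Stand-in for a learned (e.g. diffusion) denoiser: local averaging."""
    n = len(z)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(z[lo:hi]) / (hi - lo))
    return out

def pnp_hqs(y, denoiser, mu=1.0, iters=10):
    """Restore a 1-D signal y observed under identity degradation + noise."""
    x = list(y)
    for _ in range(iters):
        z = denoiser(x)  # prior (denoising) step
        # data-consistency step: proximal update toward the observation y
        x = [(yi + mu * zi) / (1 + mu) for yi, zi in zip(y, z)]
    return x
```

In the actual methods, a pre-trained diffusion model would replace the toy denoiser, and the data step would depend on the specific degradation operator.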
no code implementations • 6 May 2023 • Haijin Zeng, Jiezhang Cao, Kai Feng, Shaoguang Huang, Hongyan Zhang, Hiep Luong, Wilfried Philips
However, model-based approaches rely on hand-crafted priors and hyperparameters, while learning-based methods are incapable of estimating the inherent degradation patterns and noise distributions in the imaging procedure, which could inform supervised learning.
no code implementations • 23 Mar 2023 • Haijin Zeng, Kai Feng, Jiezhang Cao, Shaoguang Huang, Yongqiang Zhao, Hiep Luong, Jan Aelterman, Wilfried Philips
DJRD includes a newly designed Quad Bayer remosaicing (QB-Re) block and integrated denoising modules based on the Swin Transformer and a multi-scale wavelet transform.
no code implementations • 23 Mar 2023 • Haijin Zeng, Kai Feng, Shaoguang Huang, Jiezhang Cao, Yongyong Chen, Hongyan Zhang, Hiep Luong, Wilfried Philips
The advantage of Maformer is that it can leverage the MSFA information and non-local dependencies present in the data.
no code implementations • CVPR 2023 • Lei Sun, Christos Sakaridis, Jingyun Liang, Peng Sun, Jiezhang Cao, Kai Zhang, Qi Jiang, Kaiwei Wang, Luc van Gool
The performance of video frame interpolation is inherently correlated with the ability to handle motion in the input scene.
1 code implementation • CVPR 2023 • Jiezhang Cao, Qin Wang, Yongqin Xian, Yawei Li, Bingbing Ni, Zhiming Pi, Kai Zhang, Yulun Zhang, Radu Timofte, Luc van Gool
We explicitly design an implicit attention network to learn the ensemble weights for the nearby local features.
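As a toy illustration of learning ensemble weights via attention (not the paper's network; the scalar similarity and all names are made up), softmax scores over nearby local features act as the ensemble weights:

```python
import math

def attention_ensemble(query, keys, values):
    """Weight nearby local feature values by softmax similarity to the query."""
    scores = [-(query - k) ** 2 for k in keys]   # toy (negative squared distance) similarity
    m = max(scores)                              # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]          # ensemble weights, sum to 1
    return sum(w * v for w, v in zip(weights, values)), weights
```

The softmax guarantees a convex combination, so the ensembled feature always lies within the range of the nearby features it aggregates.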
no code implementations • 25 Aug 2022 • Jiezhang Cao, Qin Wang, Jingyun Liang, Yulun Zhang, Kai Zhang, Radu Timofte, Luc van Gool
To this end, we propose a new multi-scale refined optical flow-guided video denoising method, which is more robust to different noise levels.
Ranked #1 on Video Denoising on VideoLQ
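The flow-guided idea can be sketched in miniature: warp neighboring frames toward the reference frame using (here, integer) optical flow, then fuse the aligned frames. This is a hypothetical simplification on 1-D "frames", not the proposed multi-scale method:

```python
def warp(frame, flow):
    """Shift a 1-D 'frame' by integer per-pixel flow (zero-pad out of range)."""
    n = len(frame)
    out = []
    for i in range(n):
        j = i + flow[i]
        out.append(frame[j] if 0 <= j < n else 0.0)
    return out

def flow_guided_denoise(center, neighbors, flows):
    """Average the center frame with flow-aligned neighboring frames."""
    aligned = [warp(f, fl) for f, fl in zip(neighbors, flows)]
    stacks = [center] + aligned
    return [sum(vals) / len(vals) for vals in zip(*stacks)]
```

Averaging independent noise realizations over aligned frames reduces noise variance, which is why accurate flow (alignment) matters so much for video denoising.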
1 code implementation • 25 Jul 2022 • Jiezhang Cao, Jingyun Liang, Kai Zhang, Yawei Li, Yulun Zhang, Wenguan Wang, Luc van Gool
Reference-based image super-resolution (RefSR) aims to exploit auxiliary reference (Ref) images to super-resolve low-resolution (LR) images.
1 code implementation • 21 Jul 2022 • Jiezhang Cao, Jingyun Liang, Kai Zhang, Wenguan Wang, Qin Wang, Yulun Zhang, Hao Tang, Luc van Gool
These issues can be alleviated by a cascade of three separate sub-tasks, including video deblurring, frame interpolation, and super-resolution, which, however, would fail to capture the spatial and temporal correlations among video sequences.
2 code implementations • 16 Jul 2022 • Yong Guo, Jingdong Wang, Qi Chen, Jiezhang Cao, Zeshuai Deng, Yanwu Xu, Jian Chen, Mingkui Tan
Nevertheless, it is hard for existing model compression methods to accurately identify the redundant components due to the extremely large SR mapping space.
3 code implementations • 5 Jun 2022 • Jingyun Liang, Yuchen Fan, Xiaoyu Xiang, Rakesh Ranjan, Eddy Ilg, Simon Green, Jiezhang Cao, Kai Zhang, Radu Timofte, Luc van Gool
Specifically, RVRT divides the video into multiple clips and uses the previously inferred clip feature to estimate the subsequent clip feature.
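A minimal sketch of the clip-recurrent processing described above, with a running mean standing in for the learned clip feature (all details hypothetical, not RVRT's actual modules):

```python
def process_clips(frames, clip_len=2):
    """Clip-recurrent sketch: each clip is refined using the feature
    inferred from the previous clip (here, a running mean as a stand-in)."""
    clips = [frames[i:i + clip_len] for i in range(0, len(frames), clip_len)]
    prev_feat, outputs = 0.0, []
    for clip in clips:
        # fuse the current clip's statistic with the previous clip feature
        feat = (sum(clip) / len(clip) + prev_feat) / 2
        outputs.append([f + feat for f in clip])  # 'restore' the clip jointly
        prev_feat = feat                          # carry feature to next clip
    return outputs
```

Processing frames jointly within a clip while recurring across clips is a middle ground between fully parallel (high memory) and fully recurrent (long dependency chains) video models.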
2 code implementations • 24 Mar 2022 • Kai Zhang, Yawei Li, Jingyun Liang, Jiezhang Cao, Yulun Zhang, Hao Tang, Deng-Ping Fan, Radu Timofte, Luc van Gool
While recent years have witnessed a dramatic upsurge in exploiting deep neural networks for image denoising, existing methods mostly rely on simple noise assumptions, such as additive white Gaussian noise (AWGN), JPEG compression noise, and camera sensor noise; a general-purpose blind denoising method for real images remains unsolved.
Ranked #1 on Image Denoising on Urban100 sigma15
1 code implementation • 28 Jan 2022 • Jingyun Liang, Jiezhang Cao, Yuchen Fan, Kai Zhang, Rakesh Ranjan, Yawei Li, Radu Timofte, Luc van Gool
Besides, parallel warping is used to further fuse information from neighboring frames by parallel feature warping.
Ranked #1 on Deblurring on BASED
9 code implementations • 23 Aug 2021 • Jingyun Liang, Jiezhang Cao, Guolei Sun, Kai Zhang, Luc van Gool, Radu Timofte
In particular, the deep feature extraction module is composed of several residual Swin Transformer blocks (RSTB), each of which has several Swin Transformer layers together with a residual connection.
Ranked #2 on Color Image Denoising on Urban100 sigma15
1 code implementation • 12 Jun 2021 • Jiezhang Cao, Yawei Li, Kai Zhang, Luc van Gool
Specifically, to tackle the first issue, we present a spatial-temporal convolutional self-attention layer, with a theoretical analysis, to exploit locality information.
2 code implementations • 12 Apr 2021 • Yawei Li, Kai Zhang, Jiezhang Cao, Radu Timofte, Luc van Gool
The importance of locality mechanisms is validated in two ways: 1) A wide range of design choices (activation function, layer placement, expansion ratio) are available for incorporating locality mechanisms and all proper choices can lead to a performance gain over the baseline, and 2) The same locality mechanism is successfully applied to 4 vision transformers, which shows the generalization of the locality concept.
Ranked #623 on Image Classification on ImageNet
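One common way to realize such a locality mechanism is a depthwise convolution inserted into the transformer's feed-forward network. The sketch below is a hypothetical 1-D, pure-Python illustration of that design choice, not the paper's architecture:

```python
def depthwise_conv1d(tokens, kernel=(0.25, 0.5, 0.25)):
    """Per-channel 3-tap convolution over the token sequence (zero-padded):
    the kind of locality mechanism injected into a transformer FFN."""
    n, k = len(tokens), len(kernel)
    pad = k // 2
    out = []
    for i in range(n):
        acc = 0.0
        for t in range(k):
            j = i + t - pad
            if 0 <= j < n:
                acc += kernel[t] * tokens[j]
        out.append(acc)
    return out

def ffn_with_locality(tokens):
    """FFN sketch: expand -> local depthwise mixing -> ReLU -> project."""
    hidden = [2.0 * t for t in tokens]               # expansion (toy, ratio 2)
    hidden = depthwise_conv1d(hidden)                # locality mechanism
    hidden = [h if h > 0 else 0.0 for h in hidden]   # activation
    return [0.5 * h for h in hidden]                 # projection
```

Because the depthwise convolution sits between the expansion and projection, it mixes each token only with its immediate neighbors, complementing the global mixing done by self-attention.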
no code implementations • 13 Mar 2021 • Qicheng Wang, Shuhai Zhang, Jiezhang Cao, Jincheng Li, Mingkui Tan, Yang Xiang
Existing attack methods often construct adversarial examples relying on some metrics like the $\ell_p$ distance to perturb samples.
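As a reminder of how such $\ell_p$-constrained attacks work, an FGSM-style $\ell_\infty$ step perturbs each coordinate by $\epsilon$ in the direction of the loss-gradient sign, so the adversarial example stays within an $\ell_\infty$ ball of radius $\epsilon$ (toy sketch, hypothetical names; the gradient sign is supplied rather than computed):

```python
def linf_perturb(x, grad_sign, eps=0.03):
    """FGSM-style step: move each input coordinate by eps in the sign of the
    loss gradient, keeping the perturbation inside an l_inf ball of radius eps."""
    return [xi + eps * g for xi, g in zip(x, grad_sign)]

def linf_distance(a, b):
    """l_inf distance: the largest per-coordinate change."""
    return max(abs(ai - bi) for ai, bi in zip(a, b))
```

The paper's observation is that such metric-bounded perturbations are only a proxy for imperceptibility, which motivates attacks not tied to a fixed $\ell_p$ budget.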
1 code implementation • 13 Mar 2021 • Jincheng Li, Jiezhang Cao, Yifan Zhang, Jian Chen, Mingkui Tan
Relying on this, we learn a defense transformer to counterattack the adversarial examples by parameterizing the affine transformations and exploiting the boundary information of DNNs.
1 code implementation • 28 Jul 2020 • Jiezhang Cao, Yong Guo, Qingyao Wu, Chunhua Shen, Junzhou Huang, Mingkui Tan
In this paper, rather than sampling from the predefined prior distribution, we propose an LCCGAN model with local coordinate coding (LCC) to improve the performance of generating data.
no code implementations • 31 Mar 2020 • Chendi Rao, Jiezhang Cao, Runhao Zeng, Qi Chen, Huazhu Fu, Yanwu Xu, Mingkui Tan
In this paper, we aim to review various adversarial attack and defense methods on chest X-rays.
3 code implementations • CVPR 2020 • Yong Guo, Jian Chen, Jingdong Wang, Qi Chen, Jiezhang Cao, Zeshuai Deng, Yanwu Xu, Mingkui Tan
Extensive experiments with paired training data and unpaired real-world data demonstrate our superiority over existing methods.
3 code implementations • ECCV 2020 • Shoukai Xu, Haokun Li, Bohan Zhuang, Jing Liu, Jiezhang Cao, Chuangrun Liang, Mingkui Tan
More critically, our method achieves much higher accuracy on 4-bit quantization than the existing data-free quantization method.
Ranked #2 on Data Free Quantization on CIFAR-100
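For context, 4-bit quantization maps weights onto 2⁴ = 16 levels; a minimal uniform affine quantize/dequantize sketch (not the paper's method, which additionally synthesizes calibration data to compensate for the missing training set):

```python
def quantize_uniform(weights, bits=4):
    """Uniform affine quantization to 2^bits levels, then dequantization."""
    lo, hi = min(weights), max(weights)
    levels = (1 << bits) - 1                  # 15 integer steps for 4 bits
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = [round((w - lo) / scale) for w in weights]  # integer codes in [0, 15]
    return [lo + qi * scale for qi in q]      # dequantized approximation
```

Round-to-nearest bounds the per-weight error by half the quantization step, which is why accuracy degrades sharply as the bit-width (and hence the number of levels) shrinks.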
no code implementations • 1 Mar 2020 • Jiezhang Cao, Langyuan Mo, Qing Du, Yong Guo, Peilin Zhao, Junzhou Huang, Mingkui Tan
However, the resultant optimization problem is still intractable.
1 code implementation • 18 Nov 2019 • Yifan Zhang, Peilin Zhao, Shuaicheng Niu, Qingyao Wu, Jiezhang Cao, Junzhou Huang, Mingkui Tan
In these problems, there are two key challenges: the query budget is often limited, and the ratio between classes is highly imbalanced.
3 code implementations • NeurIPS 2019 • Jiezhang Cao, Langyuan Mo, Yifan Zhang, Kui Jia, Chunhua Shen, Mingkui Tan
The multiple marginal matching problem aims to learn mappings that match a source domain to multiple target domains; it has attracted great attention in many applications, such as multi-domain image translation.
no code implementations • 25 Sep 2019 • Jiezhang Cao, Jincheng Li, Xiping Hu, Peilin Zhao, Mingkui Tan
ii) the $W$-distance of a specific layer to the target distribution tends to decrease along training iterations.
no code implementations • 27 Sep 2018 • Jiezhang Cao, Yong Guo, Langyuan Mo, Peilin Zhao, Junzhou Huang, Mingkui Tan
We study the joint distribution matching problem which aims at learning bidirectional mappings to match the joint distribution of two domains.
no code implementations • 19 Sep 2018 • Yong Guo, Qi Chen, Jian Chen, Junzhou Huang, Yanwu Xu, Jiezhang Cao, Peilin Zhao, Mingkui Tan
However, most deep learning methods employ feed-forward architectures, and thus the dependencies between LR and HR images are not fully exploited, leading to limited learning performance.
no code implementations • ICML 2018 • Jiezhang Cao, Yong Guo, Qingyao Wu, Chunhua Shen, Junzhou Huang, Mingkui Tan
Generative adversarial networks (GANs) aim to generate realistic data from some prior distribution (e.g., Gaussian noise).
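The sampling pipeline described is simply a latent code z drawn from the prior and mapped through the generator; a toy sketch with a hypothetical one-layer "generator" (not a trained model):

```python
import random

def sample_prior(dim, rng):
    """Draw a latent code z from the Gaussian prior N(0, I)."""
    return [rng.gauss(0.0, 1.0) for _ in range(dim)]

def toy_generator(z, weight=2.0, bias=1.0):
    """Hypothetical one-layer 'generator' mapping latent codes to samples."""
    return [weight * zi + bias for zi in z]
```

In a real GAN, the generator is a deep network trained adversarially so that samples G(z) become indistinguishable from real data; the paper's point of departure is precisely this reliance on a predefined prior.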