1 code implementation • 28 Dec 2023 • Jingbo Lin, Zhilu Zhang, Yuxiang Wei, Dongwei Ren, Dongsheng Jiang, WangMeng Zuo
To exploit this cross-modal assistance, we propose to map degraded images into textual representations for removing the degradations, and then convert the restored textual representations into a guidance image for assisting image restoration.
1 code implementation • 13 Sep 2023 • Dongwei Ren, Wei Shang, Yi Yang, WangMeng Zuo
To aggregate long-term sharp features from detected sharp frames, we utilize a global Transformer with multi-scale matching capability.
2 code implementations • ICCV 2023 • Wei Shang, Dongwei Ren, Chaoyu Feng, Xiaotao Wang, Lei Lei, WangMeng Zuo
In this paper, we propose a Self-supervised learning framework for Dual reversed RS distortions Correction (SelfDRSC), where a DRSC network can be learned to generate a high framerate GS video only based on dual RS images with reversed distortions.
2 code implementations • CVPR 2023 • Wei Shang, Dongwei Ren, Yi Yang, Hongzhi Zhang, Kede Ma, WangMeng Zuo
Moreover, on the seemingly implausible ×16 interpolation task, our method outperforms existing methods by more than 1.5 dB in terms of PSNR.
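The dB gain quoted above refers to the standard peak signal-to-noise ratio. A minimal sketch of how PSNR is computed (the `peak` default assumes 8-bit images; not taken from the paper):

```python
import numpy as np

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio, in dB, between two images."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        # Identical images: PSNR is unbounded.
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```

A 1.5 dB improvement corresponds to roughly a 30% reduction in mean squared error, which is why PSNR gaps of this size are considered substantial.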
2 code implementations • 26 Nov 2022 • Yu Li, Dongwei Ren, Xinya Shu, WangMeng Zuo
First, in the deblurring module, a bi-directional optical flow-based deformation is introduced to tolerate spatial misalignment between deblurred and ground-truth images.
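The flow-based deformation above amounts to backward warping one image toward another. A toy sketch with nearest-neighbor sampling (the paper's deformation is presumably bilinear and differentiable; this only illustrates the idea):

```python
import numpy as np

def warp_by_flow(image, flow):
    """Backward-warp `image` by a per-pixel flow field.

    image: (H, W) array; flow: (H, W, 2) array of (dy, dx) displacements.
    Each output pixel samples the source location it "came from".
    """
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Round to the nearest source pixel and clamp to the image border.
    src_y = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, w - 1)
    return image[src_y, src_x]
```

Warping the deblurred output toward the ground truth (or vice versa) before computing the loss is what lets training tolerate small spatial misalignments.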
1 code implementation • 8 Jun 2022 • Pengju Liu, Hongzhi Zhang, Jinghui Wang, Yuzhi Wang, Dongwei Ren, WangMeng Zuo
In particular, we take the well-trained CBDNet, NBNet, HINet, Uformer and GMSNet as the denoiser pool, and a U-Net is adopted to predict pixel-wise weighting maps to fuse these denoisers.
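The fusion step described above can be sketched as follows; the weight maps would come from the U-Net, but here they are just an input array, and the softmax normalization across denoisers is an assumption, not necessarily the paper's exact scheme:

```python
import numpy as np

def fuse_denoisers(denoised_stack, weight_logits):
    """Fuse per-denoiser outputs with pixel-wise weight maps.

    denoised_stack: (N, H, W) outputs of N denoisers.
    weight_logits:  (N, H, W) raw weight maps (e.g., predicted by a U-Net).
    """
    # Softmax across the denoiser axis so weights sum to 1 at every pixel.
    logits = weight_logits - weight_logits.max(axis=0, keepdims=True)
    weights = np.exp(logits)
    weights /= weights.sum(axis=0, keepdims=True)
    return (weights * denoised_stack).sum(axis=0)
```

With uniform logits this reduces to a plain average of the pool; the learned weights let the fusion pick the best denoiser per region.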
1 code implementation • 26 Apr 2022 • Yu Li, Yaling Yi, Dongwei Ren, Qince Li, WangMeng Zuo
Generally, DPANet is an encoder-decoder with skip-connections, where two branches with shared parameters in the encoder are employed to extract and align deep features from left and right views, and one decoder is adopted to fuse aligned features for predicting the sharp image.
2 code implementations • 20 Apr 2022 • Ren Yang, Radu Timofte, Meisong Zheng, Qunliang Xing, Minglang Qiao, Mai Xu, Lai Jiang, Huaida Liu, Ying Chen, Youcheng Ben, Xiao Zhou, Chen Fu, Pei Cheng, Gang Yu, Junyi Li, Renlong Wu, Zhilu Zhang, Wei Shang, Zhengyao Lv, Yunjin Chen, Mingcai Zhou, Dongwei Ren, Kai Zhang, WangMeng Zuo, Pavel Ostyakov, Vyal Dmitry, Shakarim Soltanayev, Chervontsev Sergey, Zhussip Magauiya, Xueyi Zou, Youliang Yan, Pablo Navarrete Michelini, Yunhua Lu, Diankai Zhang, Shaoli Liu, Si Gao, Biao Wu, Chengjian Zheng, Xiaofeng Zhang, Kaidi Lu, Ning Wang, Thuong Nguyen Canh, Thong Bach, Qing Wang, Xiaopeng Sun, Haoyu Ma, Shijie Zhao, Junlin Li, Liangbin Xie, Shuwei Shi, Yujiu Yang, Xintao Wang, Jinjin Gu, Chao Dong, Xiaodi Shi, Chunmei Nian, Dong Jiang, Jucai Lin, Zhihuai Xie, Mao Ye, Dengyan Luo, Liuhan Peng, Shengjie Chen, Qian Wang, Xin Liu, Boyang Liang, Hang Dong, Yuhao Huang, Kai Chen, Xingbei Guo, Yujing Sun, Huilei Wu, Pengxu Wei, Yulin Huang, Junying Chen, Ik Hyun Lee, Sunder Ali Khowaja, Jiseok Yoon
This challenge includes three tracks.
no code implementations • CVPR 2022 • Yue Cao, Zhaolin Wan, Dongwei Ren, Zifei Yan, WangMeng Zuo
Particularly, by treating all labeled data as positive samples, PU learning is leveraged to identify negative samples (i.e., outliers) from unlabeled data.
1 code implementation • 12 Apr 2022 • Zhaohui Zheng, Rongguang Ye, Qibin Hou, Dongwei Ren, Ping Wang, WangMeng Zuo, Ming-Ming Cheng
Combining these two new components, for the first time, we show that logit mimicking can outperform feature imitation, and that the absence of localization distillation is a critical reason why logit mimicking has underperformed for years.
1 code implementation • 9 Mar 2021 • Chaohao Xie, Dongwei Ren, Lei Wang, WangMeng Zuo
For learning pseudo mask generator from the auxiliary dataset, we present a bi-level optimization formulation.
2 code implementations • CVPR 2022 • Zhaohui Zheng, Rongguang Ye, Ping Wang, Dongwei Ren, WangMeng Zuo, Qibin Hou, Ming-Ming Cheng
Previous KD methods for object detection mostly focus on imitating deep features within the imitation regions instead of mimicking classification logit due to its inefficiency in distilling localization information and trivial improvement.
1 code implementation • ICCV 2021 • Wei Shang, Dongwei Ren, Dongqing Zou, Jimmy S. Ren, Ping Luo, WangMeng Zuo
EFM can also be easily incorporated into existing deblurring networks, allowing the event-driven deblurring task to benefit from state-of-the-art deblurring methods.
1 code implementation • 2 Dec 2020 • Yu Li, Ming Liu, Yaling Yi, Qince Li, Dongwei Ren, WangMeng Zuo
To be specific, the reflection layer is estimated first, since it is generally much simpler and relatively easier to estimate.
2 code implementations • ECCV 2020 • Xiaohe Wu, Ming Liu, Yue Cao, Dongwei Ren, WangMeng Zuo
As for knowledge distillation, we first apply the learned noise models to clean images to synthesize a paired set of training images, and use the real noisy images and the corresponding denoising results in the first stage to form another paired set.
6 code implementations • 7 May 2020 • Zhaohui Zheng, Ping Wang, Dongwei Ren, Wei Liu, Rongguang Ye, QinGhua Hu, WangMeng Zuo
In this paper, we propose Complete-IoU (CIoU) loss and Cluster-NMS for enhancing geometric factors in both bounding box regression and Non-Maximum Suppression (NMS), leading to notable gains of average precision (AP) and average recall (AR), without the sacrifice of inference efficiency.
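The geometric factors in CIoU (overlap, normalized center distance, and aspect-ratio consistency) follow the published formulation; a minimal single-box sketch, not the paper's batched implementation:

```python
import math

def ciou_loss(box_pred, box_gt):
    """Complete-IoU loss for two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box_pred
    g1, j1, g2, j2 = box_gt
    # --- IoU term ---
    inter_w = max(0.0, min(x2, g2) - max(x1, g1))
    inter_h = max(0.0, min(y2, j2) - max(y1, j1))
    inter = inter_w * inter_h
    area_p = (x2 - x1) * (y2 - y1)
    area_g = (g2 - g1) * (j2 - j1)
    iou = inter / (area_p + area_g - inter + 1e-9)
    # --- normalized center-distance penalty (the DIoU term) ---
    cx_p, cy_p = (x1 + x2) / 2, (y1 + y2) / 2
    cx_g, cy_g = (g1 + g2) / 2, (j1 + j2) / 2
    rho2 = (cx_p - cx_g) ** 2 + (cy_p - cy_g) ** 2
    # Diagonal length (squared) of the smallest enclosing box.
    cw = max(x2, g2) - min(x1, g1)
    ch = max(y2, j2) - min(y1, j1)
    c2 = cw ** 2 + ch ** 2 + 1e-9
    # --- aspect-ratio consistency term ---
    v = (4 / math.pi ** 2) * (
        math.atan((g2 - g1) / (j2 - j1)) - math.atan((x2 - x1) / (y2 - y1))
    ) ** 2
    alpha = v / (1 - iou + v + 1e-9)
    return 1 - iou + rho2 / c2 + alpha * v
```

Unlike plain IoU loss, the distance and aspect-ratio terms still provide a gradient when the boxes do not overlap, which is what drives the reported AP/AR gains.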
1 code implementation • CVPR 2020 • Qilong Wang, Li Zhang, Banggu Wu, Dongwei Ren, Peihua Li, WangMeng Zuo, QinGhua Hu
Recent works have demonstrated that global covariance pooling (GCP) has the ability to improve the performance of deep convolutional neural networks (CNNs) on visual classification tasks.
20 code implementations • 19 Nov 2019 • Zhaohui Zheng, Ping Wang, Wei Liu, Jinze Li, Rongguang Ye, Dongwei Ren
By incorporating DIoU and CIoU losses into state-of-the-art object detection algorithms, e.g., YOLO v3, SSD and Faster R-CNN, we achieve notable performance gains in terms of not only the IoU metric but also the GIoU metric.
1 code implementation • CVPR 2020 • Dongwei Ren, Kai Zhang, Qilong Wang, QinGhua Hu, WangMeng Zuo
To connect MAP and deep models, we in this paper present two generative networks for respectively modeling the deep priors of clean image and blur kernel, and propose an unconstrained neural optimization solution to blind deconvolution.
1 code implementation • 16 Jun 2019 • Jun Xu, Yingkun Hou, Dongwei Ren, Li Liu, Fan Zhu, Mengyang Yu, Haoqian Wang, Ling Shao
A novel Structure and Texture Aware Retinex (STAR) model is further proposed for illumination and reflectance decomposition of a single image.
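The Retinex model factors an image as I = L ∘ R, the elementwise product of illumination and reflectance. A toy decomposition sketch, with a crude box-filter illumination estimate standing in for STAR's structure- and texture-aware prior:

```python
import numpy as np

def box_blur(img, k=3):
    """Box-filter smoothing as a stand-in illumination estimator."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def retinex_decompose(image, eps=1e-6):
    """Split an image into illumination L and reflectance R with I = L * R."""
    illumination = box_blur(image.astype(np.float64))
    reflectance = image / (illumination + eps)
    return illumination, reflectance
```

The decomposition is ill-posed (any smooth L yields a valid R), which is why STAR's priors on structure and texture are needed to pick a meaningful split.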
4 code implementations • CVPR 2019 • Dongwei Ren, WangMeng Zuo, QinGhua Hu, Pengfei Zhu, Deyu Meng
To handle this issue, this paper provides a better and simpler baseline deraining network by considering network architecture, input and output, and loss functions.
Ranked #1 on Single Image Deraining on Rain1400
no code implementations • 11 Dec 2018 • Tianyu Zhao, Wenqi Ren, Changqing Zhang, Dongwei Ren, QinGhua Hu
Specifically, we propose a degradation network to model the real-world degradation process from HR to LR via generative adversarial networks, and these generated realistic LR images paired with real-world HR images are exploited for training the SR reconstruction network, forming the first cycle.
1 code implementation • 17 Apr 2018 • Li Si-Yao, Dongwei Ren, Furong Zhao, Zijian Hu, Junfeng Li, Qian Yin
Image deblurring, a.k.a.
1 code implementation • 12 Apr 2018 • Dongwei Ren, WangMeng Zuo, David Zhang, Lei Zhang, Ming-Hsuan Yang
For blind deconvolution, since estimation error of the blur kernel is usually introduced, the subsequent non-blind deconvolution process does not restore the latent image well.
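When the kernel is known exactly, the non-blind stage has a classical closed-form frequency-domain solution; a Wiener/Tikhonov-style sketch (illustrative only, not the paper's method), assuming circular boundary conditions:

```python
import numpy as np

def wiener_deconvolve(blurred, kernel_fft, reg=1e-3):
    """Non-blind deconvolution in the Fourier domain with Tikhonov regularization.

    `reg` trades noise amplification against sharpness; when the kernel is
    only blindly estimated, its error propagates directly into the restoration.
    """
    y_fft = np.fft.fft2(blurred)
    x_fft = np.conj(kernel_fft) * y_fft / (np.abs(kernel_fft) ** 2 + reg)
    return np.real(np.fft.ifft2(x_fft))
```

The division by |K|² is what makes this step so sensitive to kernel estimation error: a small error in K is amplified wherever the kernel spectrum is weak, which is exactly the failure mode the snippet above describes.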
1 code implementation • 6 Jun 2017 • Li Si-Yao, Dongwei Ren, Qian Yin
In this paper, we first theoretically and experimentally analyze the mechanism behind estimation error with an oversized kernel, and show that it arises even on blurry images without noise.
no code implementations • CVPR 2015 • Wangmeng Zuo, Dongwei Ren, Shuhang Gu, Liang Lin, Lei Zhang
The maximum a posterior (MAP)-based blind deconvolution framework generally involves two stages: blur kernel estimation and non-blind restoration.
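The two-stage pipeline above typically stems from a joint MAP energy of roughly this form (generic notation, not necessarily the paper's exact symbols):

```latex
\min_{x,\,k}\; \|y - k \otimes x\|_2^2 \;+\; \lambda\,\Phi(x) \;+\; \gamma\,\Psi(k)
```

where y is the blurry observation, k ⊗ x is the convolution of the latent image x with the blur kernel k, and Φ, Ψ are priors on the image and kernel. Kernel estimation alternately updates k and x to minimize this energy, and non-blind restoration then solves for x with the estimated k held fixed.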