Lightweight network towards real-time image denoising on mobile devices

Deep convolutional neural networks have achieved great progress in image denoising. However, their complicated architectures and heavy computational cost hinder their deployment on mobile devices. Recent efforts to design lightweight denoising networks focus on reducing either FLOPs (floating-point operations) or the number of parameters, yet neither metric correlates directly with on-device latency. In this paper, we identify the real bottlenecks that limit the run-time performance of CNN-based models on mobile devices: memory access cost and NPU-incompatible operations, and we design our model around them. To further improve denoising quality, we propose a mobile-friendly attention module (MFA) and a model re-parameterization module (RepConv), both of which combine low latency with strong denoising performance. Building on these components, we propose a mobile-friendly denoising network, MFDNet. Experiments show that MFDNet achieves state-of-the-art performance on the real-world denoising benchmarks SIDD and DND while running in real time on mobile devices. The code and pre-trained models will be released.
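The RepConv module mentioned in the abstract follows the general structural re-parameterization idea: train with a multi-branch block, then fold the branches into a single convolution for inference so the deployed model pays no extra latency. The sketch below illustrates that idea in PyTorch under stated assumptions; the class name `RepConv`, the specific 3x3 / 1x1 / identity branch layout, and the omission of per-branch batch normalization are illustrative choices, not the paper's exact design.

```python
import torch
import torch.nn as nn


class RepConv(nn.Module):
    """Illustrative re-parameterizable block (assumed layout, not MFDNet's exact design):
    3x3 conv + 1x1 conv + identity at training time, fused into one 3x3 conv for inference."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv1 = nn.Conv2d(channels, channels, 1)
        self.fused = None  # populated by reparameterize()

    def forward(self, x):
        if self.fused is not None:                 # inference: a single 3x3 conv
            return self.fused(x)
        return self.conv3(x) + self.conv1(x) + x   # training: multi-branch sum

    @torch.no_grad()
    def reparameterize(self):
        """Fold the 1x1 and identity branches into the 3x3 kernel and bias."""
        c = self.conv3.out_channels
        k = self.conv3.weight.clone()              # shape (C, C, 3, 3)
        b = self.conv3.bias.clone()
        # 1x1 branch: place its weights at the center of the 3x3 kernel.
        k[:, :, 1, 1] += self.conv1.weight[:, :, 0, 0]
        b += self.conv1.bias
        # Identity branch: a delta kernel (1 at the center of each channel's own filter).
        k[torch.arange(c), torch.arange(c), 1, 1] += 1.0
        self.fused = nn.Conv2d(c, c, 3, padding=1)
        self.fused.weight.copy_(k)
        self.fused.bias.copy_(b)
```

After training, calling `reparameterize()` makes `forward` produce the same output with a single convolution, which is the property that keeps inference latency low on mobile NPUs.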
