Clean self-supervised MRI reconstruction from noisy, sub-sampled training data with Robust SSDU

4 Oct 2022 · Charles Millard, Mark Chiew

Most existing methods for Magnetic Resonance Imaging (MRI) reconstruction with deep learning use fully supervised training, which assumes that a high signal-to-noise ratio (SNR), fully sampled dataset is available for training. In many circumstances, however, such a dataset is highly impractical or even technically infeasible to acquire. Recently, a number of self-supervised methods for MR reconstruction have been proposed, which use sub-sampled data only. However, the majority of such methods, such as Self-Supervised Learning via Data Undersampling (SSDU), are susceptible to reconstruction errors arising from noise in the measured data. In response, we propose Robust SSDU, which provably recovers clean images from noisy, sub-sampled training data by simultaneously estimating missing k-space samples and denoising the available samples. Robust SSDU trains the reconstruction network to map from a further noisy and sub-sampled version of the data to the original, singly noisy and sub-sampled data, and applies an additive Noisier2Noise correction term at inference. We also present a related method, Noisier2Full, that recovers clean images when noisy, fully sampled data is available for training. Both proposed methods are applicable to any network architecture, straightforward to implement, and have a similar computational cost to standard training. We evaluate our methods on the multi-coil fastMRI brain dataset with a novel denoising-specific architecture and find that they perform competitively with a benchmark trained on clean, fully sampled data.
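As an informal illustration of the training and inference strategy described in the abstract, the sketch below combines an SSDU-style split of the sampled k-space locations with a Noisier2Noise-style extra corruption and correction. All names (`net`, `y`, `omega`, `noise_std`, `alpha`, `split_fraction`) and the network interface are hypothetical, and the correction shown is one common Noisier2Noise form; the paper's exact correction term and implementation may differ.

```python
import torch

def robust_ssdu_training_step(net, y, omega, noise_std, alpha, split_fraction=0.6):
    """One illustrative training step (hypothetical interface).

    y          : noisy, sub-sampled multi-coil k-space (zeros where unmeasured)
    omega      : 0/1 float sampling mask with the same shape as y
    noise_std  : standard deviation of the measurement noise
    alpha      : scale of the additional Noisier2Noise corruption
    """
    # SSDU-style split: keep a random subset of the measured locations as the
    # network input; hold out the remainder as the training target.
    lambda_mask = omega * (torch.rand_like(omega) < split_fraction)
    target_mask = omega - lambda_mask

    # Noisier2Noise-style extra corruption of the retained measurements.
    y_tilde = lambda_mask * (y + alpha * noise_std * torch.randn_like(y))

    # The network maps the further noisy, further sub-sampled data back to an
    # estimate of the original (singly noisy, singly sub-sampled) data.
    y_hat = net(y_tilde, lambda_mask)

    # Loss on the original noisy data at the held-out locations only.
    return torch.mean(torch.abs(target_mask * (y_hat - y)) ** 2)


def robust_ssdu_inference(net, y, omega, alpha):
    """Illustrative inference with a Noisier2Noise-style correction applied
    on the measured locations; a sketch, not the authors' exact procedure."""
    y_hat = net(y, omega)
    corrected = ((1 + alpha ** 2) * y_hat - y) / alpha ** 2
    return omega * corrected + (1 - omega) * y_hat
```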
