Learning Fourier-Constrained Diffusion Bridges for MRI Reconstruction

Deep generative models have recently gained traction in accelerated MRI reconstruction, and diffusion priors are particularly promising given their representational fidelity. Instead of the target transformation from undersampled to fully-sampled data required for MRI reconstruction, common diffusion priors are trained to learn a task-agnostic transformation from an asymptotic start-point of Gaussian noise to the finite end-point of fully-sampled data. During inference, data-consistency projections are interleaved between reverse diffusion steps to reach a compromise solution within the span of both the trained diffusion prior and the imaging operator of the accelerated MRI acquisition. Unfortunately, performance losses can occur due to the discrepancy between the target and learned transformations, which stems from the asymptotic normality assumption in diffusion priors. To address this discrepancy, here we introduce a novel Fourier-constrained diffusion bridge (FDB) for MRI reconstruction that transforms between a finite start-point of moderately undersampled data and an end-point of fully-sampled data. We derive the theoretical formulation of FDB as a generalized diffusion process based on a stochastic degradation operator that performs random spatial-frequency removal. We further propose an enhanced sampling algorithm with a learned correction term for soft dealiasing across reverse diffusion steps. Demonstrations on brain MRI indicate that FDB outperforms state-of-the-art methods, including both non-diffusion and diffusion priors.
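To make the two operations named in the abstract more concrete, the sketch below gives a toy Python/NumPy version of a stochastic degradation operator that removes random spatial frequencies in k-space (the forward bridge from fully-sampled toward moderately undersampled data) and of a hard data-consistency projection of the kind interleaved between reverse diffusion steps. This is a minimal sketch under assumed conventions, not the authors' implementation; the function names, the per-step removal budget, and the mask handling are hypothetical.

```python
# Minimal sketch (not the authors' code) of two operations described in the
# abstract: a stochastic degradation operator that removes random spatial
# frequencies in k-space, and a hard data-consistency projection that
# re-imposes acquired k-space samples on an intermediate image estimate.
# Function names, the per-step removal budget, and mask conventions are
# assumptions made for illustration only.
import numpy as np

def random_frequency_removal(image, keep_mask, num_remove, rng):
    """One forward degradation step: zero out `num_remove` randomly chosen
    spatial frequencies that are still retained in `keep_mask`."""
    kspace = np.fft.fft2(image)                        # image -> k-space
    retained = np.flatnonzero(keep_mask)               # indices of kept frequencies
    drop = rng.choice(retained, size=min(num_remove, retained.size), replace=False)
    new_mask = keep_mask.flatten()
    new_mask[drop] = 0                                 # remove the chosen frequencies
    new_mask = new_mask.reshape(keep_mask.shape)
    degraded = np.fft.ifft2(kspace * new_mask).real    # back to image domain
    return degraded, new_mask

def data_consistency(estimate, acquired_kspace, sampling_mask):
    """Replace k-space entries of `estimate` with the acquired samples
    wherever the acquisition mask is nonzero (hard projection)."""
    k_est = np.fft.fft2(estimate)
    k_dc = np.where(sampling_mask > 0, acquired_kspace, k_est)
    return np.fft.ifft2(k_dc).real

# Toy forward bridge: degrade a fully-sampled image over several steps toward
# a moderately undersampled start-point by progressively removing frequencies.
rng = np.random.default_rng(0)
x0 = rng.standard_normal((64, 64))                     # stand-in for a fully-sampled image
mask = np.ones_like(x0)
for t in range(10):                                    # T = 10 degradation steps
    x_t, mask = random_frequency_removal(x0, mask, num_remove=100, rng=rng)
```

In the actual method, reverse sampling would additionally apply a trained network with the learned correction term between such steps; the sketch above only covers the degradation and data-consistency operations.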
