no code implementations • 23 May 2024 • Matthias Chung, Rick Archibald, Paul Atzberger, Jack Michael Solomon
Scientific datasets present unique challenges for machine learning-driven compression methods, including more stringent requirements on accuracy and mitigation of potential invalidating artifacts.
no code implementations • 21 May 2024 • Matthias Chung, Emma Hart, Julianne Chung, Bas Peters, Eldad Haber
We consider the solution of nonlinear inverse problems where the forward problem is a discretization of a partial differential equation.
no code implementations • 17 Apr 2023 • Babak Maboudi Afkham, Julianne Chung, Matthias Chung
In this work, we describe a new approach that uses variational encoder-decoder (VED) networks for efficient goal-oriented uncertainty quantification for inverse problems.
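The abstract does not fix an architecture, so the following is only a minimal sketch of the VED idea under assumed ingredients: a linear "encoder" maps a measurement to a latent mean and log-variance, the reparameterization trick draws latent samples, and a "decoder" maps those samples to a goal quantity of interest, whose spread gives the uncertainty estimate. All dimensions and (untrained) weights here are hypothetical, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: measurement y (m=8), latent z (d=2), goal quantity q (p=3).
m, d, p = 8, 2, 3
# Untrained stand-in weights; a real VED would learn these from training pairs.
W_mu,  b_mu  = 0.1 * rng.standard_normal((d, m)), np.zeros(d)
W_lv,  b_lv  = 0.1 * rng.standard_normal((d, m)), np.zeros(d)
W_dec, b_dec = 0.1 * rng.standard_normal((p, d)), np.zeros(p)

def ved_sample(y, n_samples=1000):
    """Map a measurement y to samples of the goal quantity via the latent space."""
    mu      = W_mu @ y + b_mu               # encoder output: latent mean
    log_var = W_lv @ y + b_lv               # encoder output: latent log-variance
    eps = rng.standard_normal((n_samples, d))
    z = mu + np.exp(0.5 * log_var) * eps    # reparameterization trick
    return z @ W_dec.T + b_dec              # decoder: quantity-of-interest samples

samples = ved_sample(rng.standard_normal(m))
q_mean, q_std = samples.mean(axis=0), samples.std(axis=0)  # UQ summaries
```

The point of the goal-oriented setup is that the decoder targets the quantity of interest directly, so no full inverse solution is ever reconstructed.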
1 code implementation • 28 Sep 2021 • Elizabeth Newman, Julianne Chung, Matthias Chung, Lars Ruthotto
In the absence of theoretical guidelines or prior experience on similar tasks, this requires solving many training problems, which can be time-consuming and demanding on computational resources.
no code implementations • 14 Apr 2021 • Babak Maboudi Afkham, Julianne Chung, Matthias Chung
We emphasize that the key advantage of using DNNs for learning regularization parameters, compared to previous works on learning via optimal experimental design or empirical Bayes risk minimization, is greater generalizability.
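To make the role of a learned regularization parameter concrete, here is a sketch under assumed details: a standard Tikhonov problem is solved in closed form, with a fixed scalar `lam_predicted` standing in for the output of the trained DNN that, in the paper's setting, would map the observed data to the parameter. The problem sizes and the stand-in value are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def tikhonov_solve(A, b, lam):
    """Closed-form solution of  min_x ||A x - b||^2 + lam ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Hypothetical ill-conditioned forward operator with decaying column scales.
A = rng.standard_normal((50, 20)) @ np.diag(1.0 / (1.0 + np.arange(20.0)))
x_true = rng.standard_normal(20)
b = A @ x_true + 0.01 * rng.standard_normal(50)

lam_predicted = 1e-3          # stand-in for a DNN's prediction lambda(b)
x_reg = tikhonov_solve(A, b, lam_predicted)
```

The generalizability claim is that, once trained, such a network predicts a parameter for new data in a single forward pass, with no per-problem tuning loop.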
no code implementations • 23 Feb 2017 • Julianne Chung, Matthias Chung, J. Tanner Slagel, Luis Tenorio
We describe stochastic Newton and stochastic quasi-Newton approaches to efficiently solve large linear least-squares problems where very large data sets present a significant computational burden (e.g., the data may exceed computer memory or be collected in real time).
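One way to see why block-wise updates help in this regime is the following sketch (not the authors' exact sampled update): rows of the system arrive in blocks, and only the small n-by-n Gram matrix and right-hand side are accumulated, so a Newton-type step can be taken without ever holding the full data set in memory. The block sizes here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 5                       # number of unknowns
x_true = rng.standard_normal(n)

H = np.zeros((n, n))        # running Gram matrix  sum_k A_k^T A_k
g = np.zeros(n)             # running right-hand side  sum_k A_k^T b_k

for _ in range(20):         # 20 incoming blocks of 10 rows each
    A_k = rng.standard_normal((10, n))
    b_k = A_k @ x_true      # noiseless block of observations
    H += A_k.T @ A_k        # cheap n-by-n update per block
    g += A_k.T @ b_k        # cheap length-n update per block

x = np.linalg.solve(H, g)   # Newton-type step on the accumulated system;
                            # recovers x_true here since the data are noiseless
```

The stochastic Newton and quasi-Newton methods of the paper refine this basic idea with sampled or approximate curvature updates rather than exact accumulation.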