1 code implementation • 18 Jul 2022 • Batu Ozturkler, Arda Sahiner, Tolga Ergen, Arjun D. Desai, Christopher M. Sandino, Shreyas Vasanawala, John M. Pauly, Morteza Mardani, Mert Pilanci
However, they require several iterations of a large neural network to handle high-dimensional imaging tasks such as 3D MRI.
no code implementations • 17 May 2022 • Arda Sahiner, Tolga Ergen, Batu Ozturkler, John Pauly, Morteza Mardani, Mert Pilanci
Vision transformers using self-attention, or its proposed alternatives, have demonstrated promising results in many image-related tasks.
1 code implementation • 21 Apr 2022 • Beliz Gunel, Arda Sahiner, Arjun D. Desai, Akshay S. Chaudhari, Shreyas Vasanawala, Mert Pilanci, John Pauly
Unrolled neural networks have enabled state-of-the-art reconstruction performance and fast inference times for the accelerated magnetic resonance imaging (MRI) reconstruction task.
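Unrolled reconstruction networks alternate a data-consistency update with a regularization step, repeated for a fixed number of iterations. A minimal numpy sketch in the ISTA style, using a fixed soft-threshold as a stand-in for the learned denoiser of a real unrolled network (the measurement matrix `A`, the sparse ground truth, the threshold `lam`, and the iteration count are illustrative assumptions, not the paper's MRI setup):

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of the l1 norm; a learned denoiser would replace this.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def unrolled_ista(A, y, n_iters=300, lam=0.05):
    """Fixed-depth unrolling of x_{k+1} = prox(x_k - t * A^T (A x_k - y))."""
    t = 1.0 / np.linalg.norm(A, 2) ** 2  # step size from the Lipschitz constant of A^T A
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = soft_threshold(x - t * A.T @ (A @ x - y), t * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))       # underdetermined measurement operator
x_true = np.zeros(100)
x_true[:5] = 1.0                         # sparse ground truth
y = A @ x_true
x_hat = unrolled_ista(A, y)
```

In a trained unrolled network, the denoiser and step sizes are learned per iteration, which is what makes the fixed, small depth sufficient for fast inference.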
1 code implementation • 2 Feb 2022 • Aaron Mishkin, Arda Sahiner, Mert Pilanci
We develop fast algorithms and robust software for convex optimization of two-layer neural networks with ReLU activation functions.
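The convex approach replaces the nonconvex two-layer ReLU training problem with a group-sparse convex program over fixed activation patterns. A minimal numpy sketch of the unconstrained gated-ReLU relaxation, sampling patterns from random gates and solving the resulting group-lasso problem by proximal gradient descent (the pattern count `P`, regularization `beta`, and planted-neuron data are illustrative assumptions; the authors' software handles the fully constrained program and much larger scales):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, P = 50, 4, 16                       # samples, features, sampled activation patterns
X = rng.standard_normal((n, d))
y = np.maximum(X @ rng.standard_normal(d), 0.0)   # targets from a planted ReLU neuron

# Fix ReLU activation patterns D_j = diag(1[X g_j > 0]) from random gate vectors g_j.
G = rng.standard_normal((d, P))
D = (X @ G > 0).astype(float)             # (n, P) pattern masks

beta = 0.1
M = np.hstack([D[:, [j]] * X for j in range(P)])  # stacked masked designs [D_1 X, ..., D_P X]
t = 1.0 / np.linalg.norm(M, 2) ** 2               # step size from the exact Lipschitz constant
U = np.zeros((d, P))                              # one weight vector u_j per pattern
for _ in range(500):
    r = ((X @ U) * D).sum(axis=1) - y             # residual of the masked-linear model
    U = U - t * (X.T @ (D * r[:, None]))          # gradient step on the smooth loss
    norms = np.linalg.norm(U, axis=0, keepdims=True)
    U *= np.maximum(0.0, 1.0 - t * beta / np.maximum(norms, 1e-12))  # group soft-threshold
pred = ((X @ U) * D).sum(axis=1)
```

The group penalty drives whole columns of `U` to zero, so the solver also selects which activation patterns (i.e., which effective neurons) the network uses.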
no code implementations • NeurIPS Workshop Deep_Invers 2021 • Batu Ozturkler, Arda Sahiner, Tolga Ergen, Arjun D. Desai, John M. Pauly, Shreyas Vasanawala, Morteza Mardani, Mert Pilanci
Model-based deep learning approaches have recently shown state-of-the-art performance for accelerated MRI reconstruction.
1 code implementation • ICLR 2022 • Arda Sahiner, Tolga Ergen, Batu Ozturkler, Burak Bartan, John Pauly, Morteza Mardani, Mert Pilanci
In this work, we analyze the training of Wasserstein GANs with two-layer neural network discriminators through the lens of convex duality and, for a variety of generators, expose the conditions under which Wasserstein GANs can be solved exactly with convex optimization approaches or represented as convex-concave games.
no code implementations • ICLR 2022 • Tolga Ergen, Arda Sahiner, Batu Ozturkler, John Pauly, Morteza Mardani, Mert Pilanci
Batch Normalization (BN) is a commonly used technique to accelerate and stabilize training of deep neural networks.
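For reference, the BN forward pass normalizes each feature to zero mean and unit variance over the batch, then applies a learned affine map. A minimal numpy sketch of the training-time computation (`gamma`, `beta`, and `eps` are the standard parameters; the batch values here are illustrative):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Batch Normalization forward pass for a (batch, features) array."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)                        # biased variance over the batch
    x_hat = (x - mean) / np.sqrt(var + eps)    # normalize each feature
    return gamma * x_hat + beta                # learned scale and shift

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 8)) * 3.0 + 5.0   # batch with nonzero mean and scale
out = batch_norm(x, gamma=np.ones(8), beta=np.zeros(8))
```

At inference time, running averages of `mean` and `var` replace the batch statistics so the output no longer depends on the other examples in the batch.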
no code implementations • ICLR 2021 • Arda Sahiner, Tolga Ergen, John Pauly, Mert Pilanci
We describe the convex semi-infinite dual of the two-layer vector-output ReLU neural network training problem.
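For context, the two-layer vector-output ReLU training problem whose dual is analyzed can be stated in the standard weight-decay form (notation here is the generic one, assumed rather than quoted from the paper: data $X \in \mathbb{R}^{n \times d}$, targets $Y \in \mathbb{R}^{n \times c}$, $m$ hidden neurons):

```latex
\min_{W_1 \in \mathbb{R}^{d \times m},\; W_2 \in \mathbb{R}^{m \times c}}
  \frac{1}{2} \left\| (X W_1)_+ W_2 - Y \right\|_F^2
  + \frac{\beta}{2} \left( \|W_1\|_F^2 + \|W_2\|_F^2 \right)
```

Here $(\cdot)_+$ is the elementwise ReLU. The dual is semi-infinite because it replaces the finite sum over $m$ neurons with constraints indexed by a continuum of candidate hidden units.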
no code implementations • ICLR 2021 • Arda Sahiner, Morteza Mardani, Batu Ozturkler, Mert Pilanci, John Pauly
Neural networks have shown tremendous potential for reconstructing high-resolution images in inverse problems.