no code implementations • 1 Sep 2023 • Burak Bartan, Mert Pilanci
We present a novel distributed computing framework that is robust to slow compute nodes (stragglers) and is capable of both approximate and exact computation of linear operations.
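The snippet above does not detail the framework, so as a rough illustration of the general coded-computation idea behind straggler robustness, here is a minimal NumPy sketch; the two-block split and single parity block are illustrative assumptions, not the paper's actual scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
x = rng.standard_normal(4)

# Split A into two row blocks and add one parity block,
# assigning one block per (hypothetical) worker.
A1, A2 = A[:3], A[3:]
tasks = {"w1": A1, "w2": A2, "w3": A1 + A2}

# Suppose worker "w2" straggles; its block A2 @ x is recovered
# from the parity result instead of waiting for it.
results = {w: M @ x for w, M in tasks.items() if w != "w2"}
top = results["w1"]                     # A1 @ x
bottom = results["w3"] - results["w1"]  # (A1 + A2) x - A1 x = A2 x
Ax = np.concatenate([top, bottom])

assert np.allclose(Ax, A @ x)
```

Any two of the three worker results suffice here; real coded-computation schemes generalize this redundancy to many blocks and workers.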
1 code implementation • 27 Apr 2023 • Burak Bartan, Haoming Li, Harris Teague, Christopher Lott, Bistra Dilkina
The deployment and training of neural networks on edge computing devices pose many challenges.
no code implementations • 18 Mar 2022 • Burak Bartan, Mert Pilanci
Furthermore, we develop unbiased parameter averaging methods for randomized second order optimization for regularized problems that employ sketching of the Hessian.
1 code implementation • ICLR 2022 • Arda Sahiner, Tolga Ergen, Batu Ozturkler, Burak Bartan, John Pauly, Morteza Mardani, Mert Pilanci
In this work, we analyze the training of Wasserstein GANs with two-layer neural network discriminators through the lens of convex duality. For a variety of generators, we expose the conditions under which Wasserstein GANs can be solved exactly with convex optimization approaches or represented as convex-concave games.
no code implementations • 4 May 2021 • Burak Bartan, Mert Pilanci
Neural networks (NNs) have been extremely successful across many tasks in machine learning.
no code implementations • 7 Jan 2021 • Burak Bartan, Mert Pilanci
In this paper, we develop exact convex optimization formulations for two-layer neural networks with second-degree polynomial activations based on semidefinite programming.
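The SDP formulation is not reproduced here, but it rests on a lifting that is easy to check numerically: a two-layer network with quadratic activations is linear in a lifted matrix variable. A small sketch of that observation (dimensions and weights are arbitrary, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
d, m = 5, 8                        # input dimension, hidden width
W = rng.standard_normal((m, d))    # first-layer weights (rows w_j)
alpha = rng.standard_normal(m)     # second-layer weights
x = rng.standard_normal(d)

# Network output with quadratic activations: sum_j alpha_j (w_j^T x)^2
f = float(alpha @ (W @ x) ** 2)

# Lifted form: f(x) = x^T Z x with Z = sum_j alpha_j w_j w_j^T,
# so the output is linear in the matrix variable Z -- the property
# that makes semidefinite relaxations of such networks possible.
Z = (W.T * alpha) @ W
assert np.isclose(f, x @ Z @ x)
```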
no code implementations • NeurIPS 2020 • Michał Dereziński, Burak Bartan, Mert Pilanci, Michael W. Mahoney
In distributed second order optimization, a standard strategy is to average many local estimates, each of which is based on a small sketch or batch of the data.
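As a toy illustration of why naive averaging of local estimates needs correction (the problem this line alludes to, not the paper's estimator), the following sketch averages inverse-Hessian estimates from small row subsamples and shows the resulting upward bias; all sizes are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, q, m = 1000, 4, 40, 200      # rows, cols, batch size, # estimates
A = rng.standard_normal((n, d))
H = A.T @ A / n                    # true (scaled) least-squares Hessian

# Average many local inverse-Hessian estimates, each built from a
# small uniform row subsample (one kind of "small batch of the data").
invs = []
for _ in range(m):
    idx = rng.choice(n, size=q, replace=False)
    H_k = A[idx].T @ A[idx] / q
    invs.append(np.linalg.inv(H_k))
avg_inv = np.mean(invs, axis=0)

# By Jensen's inequality, E[H_k^{-1}] over-estimates H^{-1}; this is
# the bias that debiased averaging schemes are designed to remove.
print(np.trace(avg_inv), np.trace(np.linalg.inv(H)))
```

The first printed trace reliably exceeds the second, which is the systematic error that survives no matter how many local estimates are averaged.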
no code implementations • 16 Feb 2020 • Burak Bartan, Mert Pilanci
In this work, we study distributed sketching methods for large scale regression problems.
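A minimal sketch of the distributed-sketching idea for regression: each worker solves a small sketched least-squares problem and the master averages the solutions. The Gaussian sketch used here is one common choice, assumed for illustration rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, q, m = 2000, 10, 200, 8      # rows, cols, sketch size, workers
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

# Each worker solves a q-row sketched problem instead of the n-row one.
estimates = []
for _ in range(m):
    S = rng.standard_normal((q, n)) / np.sqrt(q)   # Gaussian sketch
    x_k, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
    estimates.append(x_k)

# The master averages the local solutions.
x_avg = np.mean(estimates, axis=0)
x_full, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(x_avg - x_full) / np.linalg.norm(x_full))
```

Each worker touches only a q-by-d sketched system, so the per-node cost and communication are independent of n; averaging over m workers drives the error of the combined estimate down.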
no code implementations • 16 Feb 2020 • Burak Bartan, Mert Pilanci
We consider distributed optimization problems where forming the Hessian is computationally challenging and communication is a significant bottleneck.
no code implementations • 13 Jul 2019 • Burak Bartan, Mert Pilanci
We introduce a novel distributed derivative-free optimization framework that is resilient to stragglers.
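A hedged sketch of the derivative-free setting: workers evaluate the objective along random directions, and a straggler-resilient master builds a gradient estimate from only the responses that arrive. The smoothing estimator and the "last ten workers straggle" assumption are illustrative, not the paper's framework.

```python
import numpy as np

rng = np.random.default_rng(3)
f = lambda x: np.sum(x ** 2)       # toy objective (true gradient is 2x)
x = np.array([1.0, -2.0, 3.0])
mu, m = 1e-4, 40                   # smoothing radius, # worker probes

# Each worker evaluates f along one random direction (no gradients used).
dirs = rng.standard_normal((m, len(x)))
deltas = np.array([(f(x + mu * u) - f(x)) / mu for u in dirs])

# Pretend the last 10 workers straggled: form the Gaussian-smoothing
# gradient estimate from the 30 fastest responses only.
done = slice(0, 30)
grad_est = (deltas[done, None] * dirs[done]).mean(axis=0)
print(grad_est)   # a rough estimate of the true gradient 2 * x
```

Because each probe direction is independent, dropping slow workers only increases the variance of the estimate rather than blocking the iteration, which is the property a straggler-resilient scheme exploits.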
no code implementations • 21 Jan 2019 • Burak Bartan, Mert Pilanci
We propose a serverless computing mechanism for distributed computation based on polar codes.
no code implementations • 31 Dec 2018 • Burak Bartan, Mert Pilanci
We propose convex relaxations for convolutional neural nets with one hidden layer where the output weights are fixed.