no code implementations • 13 Oct 2020 • Nadav Hallak, Panayotis Mertikopoulos, Volkan Cevher
In this setting, the minimization of external regret is beyond reach for first-order methods, so we focus on a local regret measure defined via a proximal-gradient mapping.
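For composite problems of the form min f(x) + g(x) with g non-smooth, the proximal-gradient mapping is the standard surrogate for the gradient when measuring stationarity. A minimal illustrative sketch (with g taken to be an ℓ1 penalty for concreteness; the paper's exact regret measure is not reproduced here):

```python
import numpy as np

def prox_l1(v, t):
    # Soft-thresholding: proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad_mapping(x, grad_f, eta, lam):
    # G_eta(x) = (x - prox_{eta*lam*||.||_1}(x - eta*grad_f(x))) / eta
    # ||G_eta(x)|| = 0 exactly at stationary points of f + lam*||.||_1
    x_plus = prox_l1(x - eta * grad_f(x), eta * lam)
    return (x - x_plus) / eta

# Example with f(x) = 0.5 * ||x||^2, so grad_f(x) = x
x = np.array([1.0, -0.2, 0.05])
g = prox_grad_mapping(x, lambda z: z, eta=0.5, lam=0.1)
```

A local regret measure can then be built by accumulating the squared norms ||G_eta(x_t)||^2 along the iterates, rather than comparing losses to a fixed comparator as in external regret.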
no code implementations • 2 Jul 2020 • Fabian Latorre, Paul Rolland, Nadav Hallak, Volkan Cevher
We establish two important new properties of the 1-path-norm of shallow neural networks.
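For a single-hidden-layer network with scalar output, the 1-path-norm sums, over every input-to-output path, the product of the absolute weights along that path. A minimal sketch under that common definition (details may differ from the paper's exact formulation):

```python
import numpy as np

def one_path_norm(W, v):
    # Single hidden layer, scalar output:
    # each path goes input i -> hidden unit j -> output, so the
    # 1-path-norm is sum_{i,j} |v_j| * |W_{j,i}|.
    W = np.asarray(W, dtype=float)
    v = np.asarray(v, dtype=float)
    return float(np.sum(np.abs(v)[:, None] * np.abs(W)))

# Two inputs, two hidden units
W = np.array([[1.0, -2.0],
              [3.0,  0.0]])   # hidden x input
v = np.array([1.0, -1.0])     # output weights
```

Unlike the product of layer-wise norms, this quantity couples the two layers path by path, which is what makes it a tighter complexity measure and a Lipschitz-constant bound for the network.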
no code implementations • NeurIPS 2020 • Panayotis Mertikopoulos, Nadav Hallak, Ali Kavis, Volkan Cevher
This paper analyzes the trajectories of stochastic gradient descent (SGD) to help understand the algorithm's convergence properties in non-convex problems.
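A trajectory-level view of SGD can be illustrated on a simple non-convex objective; the sketch below records the full iterate sequence rather than only the final point (an illustrative setup, not the paper's analysis framework):

```python
import numpy as np

def sgd(grad, x0, step=0.01, iters=1000, noise=0.1, seed=0):
    # Plain SGD with additive Gaussian gradient noise:
    # x_{t+1} = x_t - step * (grad(x_t) + noise * xi_t)
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(iters):
        g = grad(x) + noise * rng.standard_normal(x.shape)
        x = x - step * g
        traj.append(x.copy())
    return np.array(traj)

# Non-convex example: f(x) = (x^2 - 1)^2, minima at x = +1 and x = -1
f = lambda x: (x**2 - 1.0) ** 2
grad = lambda x: 4.0 * x * (x**2 - 1.0)
traj = sgd(grad, np.array([2.0]))
```

Studying the whole trajectory, rather than just the last iterate, is what lets one ask which of the two minima the noise drives the process toward and how long it lingers near saddle-like regions.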