no code implementations • 28 May 2024 • Camille Castera, Peter Ochs
Towards designing learned optimization algorithms that are usable beyond their training setting, we identify key principles that classical algorithms obey but that have, up to now, not been used for Learning to Optimize (L2O).
1 code implementation • 16 Nov 2023 • Severin Maier, Camille Castera, Peter Ochs
We introduce an autonomous system with closed-loop damping for first-order convex optimization.
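To fix ideas, "closed-loop" damping means the friction coefficient is a feedback of the state rather than a prescribed function of time, which is what makes the system autonomous. The generic forms below only illustrate that distinction; the specific damping law is the paper's contribution and is not reproduced here.

```latex
% Open-loop damping: a prescribed function of time (non-autonomous system),
% e.g. the Nesterov-like choice gamma(t) = alpha / t:
\[
  \ddot{x}(t) + \tfrac{\alpha}{t}\,\dot{x}(t) + \nabla f\big(x(t)\big) = 0
\]
% Closed-loop damping: a feedback of the current state (autonomous system):
\[
  \ddot{x}(t) + \gamma\big(x(t),\dot{x}(t)\big)\,\dot{x}(t) + \nabla f\big(x(t)\big) = 0
\]
```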
1 code implementation • 8 Nov 2021 • Camille Castera
We study the asymptotic behavior of second-order algorithms mixing Newton's method and inertial gradient descent in non-convex landscapes.
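The second-order dynamics in question combine a viscous (inertial gradient descent) damping term with a Hessian-driven (Newton-like) one. A standard representative of this family, underlying the DIN/INNA line of work, is the following system, where the weights of the two damping terms shown are assumptions of this sketch:

```latex
\[
  \ddot{x}(t) + \alpha\,\dot{x}(t)
  + \beta\,\nabla^2 f\big(x(t)\big)\,\dot{x}(t)
  + \nabla f\big(x(t)\big) = 0
\]
% alpha > 0 : viscous (inertial gradient descent) damping
% beta  > 0 : Hessian-driven (Newton-like) damping
```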
1 code implementation • 5 Mar 2021 • Camille Castera, Jérôme Bolte, Cédric Févotte, Edouard Pauwels
Aiming at a direct and simple improvement of vanilla SGD, this paper presents a fine-tuning of its step sizes in the mini-batch case.
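As an illustration only, and not the paper's exact rule, a second-order-like step-size tuning can be sketched with a Barzilai-Borwein-type curvature estimate computed from successive mini-batch gradients; all names and constants below are hypothetical.

```python
import numpy as np

def step_tuned_sgd(grad, x0, n_iters=1000, gamma0=0.1,
                   gamma_min=1e-4, gamma_max=1.0):
    """Sketch of SGD with a second-order-like step-size tuning.

    `grad(x)` returns a mini-batch gradient estimate at x. The step
    size is adapted from a curvature estimate along the last
    displacement (an illustrative Barzilai-Borwein-type rule).
    """
    x, gamma = x0.copy(), gamma0
    g_prev, x_prev = grad(x), x.copy()
    x = x - gamma * g_prev                    # plain first step
    for _ in range(n_iters - 1):
        g = grad(x)
        s = x - x_prev                        # last displacement
        y = g - g_prev                        # change in mini-batch gradient
        ss = float(s @ s)
        if ss > 0.0:
            curv = (y @ s) / ss               # curvature estimate along s
            if curv > 0.0:                    # only trust positive curvature
                gamma = float(np.clip(1.0 / curv, gamma_min, gamma_max))
        x_prev, g_prev = x.copy(), g
        x = x - gamma * g                     # tuned SGD step
    return x
```

For instance, `grad = lambda x: A @ x - b + 0.01 * np.random.randn(x.size)` mimics a noisy quadratic problem (`A` and `b` hypothetical data).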
2 code implementations • 29 May 2019 • Camille Castera, Jérôme Bolte, Cédric Févotte, Edouard Pauwels
We prove the convergence of INNA, an inertial Newton algorithm, for most deep learning problems.
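INNA discretizes dynamics of the type sketched above through a Hessian-free first-order reformulation, so only stochastic gradients are needed. A minimal sketch, assuming a constant step size and a generic mini-batch gradient oracle `grad` (both simplifications):

```python
import numpy as np

def inna(grad, theta0, alpha=0.5, beta=0.1, gamma=0.01, n_iters=1000):
    """Minimal sketch of INNA (Inertial Newton Algorithm).

    Integrates the inertial Newton dynamics via a Hessian-free
    first-order reformulation: only (mini-batch) gradients
    `grad(theta)` are used, no second-order information.
    """
    theta = theta0.copy()
    # Initialize the auxiliary variable so the first move
    # reduces to a gradient step scaled by beta.
    psi = (1.0 - alpha * beta) * theta0
    for _ in range(n_iters):
        v = grad(theta)                                 # stochastic gradient
        common = (1.0 / beta - alpha) * theta - psi / beta
        psi = psi + gamma * common                      # auxiliary update
        theta = theta + gamma * (common - beta * v)     # parameter update
    return theta
```

The reformulation is what keeps the per-iteration cost at the level of SGD despite the Newton-type damping in the underlying dynamics.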