Search Results for author: Sebastian Kassing

Found 6 papers, 0 papers with code

Stochastic Modified Flows for Riemannian Stochastic Gradient Descent

no code implementations • 2 Feb 2024 • Benjamin Gess, Sebastian Kassing, Nimit Rana

We give quantitative estimates for the rate of convergence of Riemannian stochastic gradient descent (RSGD) to Riemannian gradient flow and to a diffusion process, the so-called Riemannian stochastic modified flow (RSMF).
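
For intuition, here is a minimal sketch of one RSGD step on the unit sphere (an illustrative manifold chosen here, not taken from the paper): the stochastic Euclidean gradient is projected onto the tangent space, and the iterate is mapped back to the manifold by a retraction.

import numpy as np

def rsgd_step_sphere(x, stoch_grad, lr):
    """One RSGD step on the unit sphere S^{d-1} (hypothetical example).

    x          -- current point, unit-norm numpy array
    stoch_grad -- noisy Euclidean gradient of the objective at x
    lr         -- learning rate (step size)
    """
    # Project the Euclidean gradient onto the tangent space at x.
    riem_grad = stoch_grad - np.dot(stoch_grad, x) * x
    # Gradient step, then retraction (normalization) back to the sphere.
    y = x - lr * riem_grad
    return y / np.linalg.norm(y)

As the learning rate tends to zero, such iterates track the Riemannian gradient flow; the RSMF of the paper refines this deterministic limit by a diffusion correction.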

On the existence of optimal shallow feedforward networks with ReLU activation

no code implementations • 6 Mar 2023 • Steffen Dereich, Sebastian Kassing

We prove existence of global minima in the loss landscape for the approximation of continuous target functions using shallow feedforward artificial neural networks with ReLU activation.
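
For reference, a shallow feedforward ReLU network of width m realizes the map x -> c^T ReLU(A x + b) + d; a minimal sketch follows (names and shapes are my own, not the paper's notation).

import numpy as np

def shallow_relu(x, A, b, c, d):
    """Shallow ReLU network: input x (n,), hidden weights A (m, n),
    hidden biases b (m,), output weights c (m,), output bias d (scalar)."""
    return float(c @ np.maximum(A @ x + b, 0.0) + d)

The existence result concerns global minimizers of the loss (e.g. the L2 distance to a continuous target function) over all such parameter choices.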

On the existence of minimizers in shallow residual ReLU neural network optimization landscapes

no code implementations • 28 Feb 2023 • Steffen Dereich, Arnulf Jentzen, Sebastian Kassing

Many mathematical convergence results for gradient descent (GD) based algorithms assume that the GD process is (almost surely) bounded; moreover, in concrete numerical simulations, divergence of the GD process may slow down, or even entirely prevent, convergence of the error function.
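
A toy illustration of the divergence issue (my example, not the paper's): for f(x) = x^2 / 2, gradient descent gives x_{n+1} = (1 - lr) * x_n, so the iterates blow up whenever lr > 2.

def gd_quadratic(x0, lr, steps):
    """GD on f(x) = x^2 / 2; the iteration is x <- (1 - lr) * x."""
    x = x0
    for _ in range(steps):
        x -= lr * x  # the gradient of x^2 / 2 is x
    return x

print(gd_quadratic(1.0, 2.5, 20))  # |1 - 2.5| = 1.5 > 1: the iterate diverges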

Stochastic Modified Flows, Mean-Field Limits and Dynamics of Stochastic Gradient Descent

no code implementations • 14 Feb 2023 • Benjamin Gess, Sebastian Kassing, Vitalii Konarovskyi

We propose new limiting dynamics for stochastic gradient descent in the small learning rate regime called stochastic modified flows.
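
For orientation, a classical first-order diffusion approximation of SGD in the small learning rate regime (a related construction from the literature, not the stochastic modified flow introduced in this paper) is the SDE

\mathrm{d}X_t = -\nabla f(X_t)\,\mathrm{d}t + \sqrt{\eta}\,\Sigma(X_t)^{1/2}\,\mathrm{d}W_t,

where \eta is the learning rate, f the objective, and \Sigma the covariance of the gradient noise; the stochastic modified flows proposed here refine such limiting dynamics.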

Convergence rates for momentum stochastic gradient descent with noise of machine learning type

no code implementations • 7 Feb 2023 • Benjamin Gess, Sebastian Kassing

We consider the momentum stochastic gradient descent scheme (MSGD) and its continuous-in-time counterpart in the context of non-convex optimization.
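
A minimal sketch of the MSGD update in standard heavy-ball form with a friction (damping) parameter (variable names are my own, not the paper's):

def msgd_step(x, v, stoch_grad, lr, friction):
    """One momentum SGD step: damp the velocity, add the (noisy)
    gradient step, then move the iterate along the velocity."""
    v = (1.0 - friction) * v - lr * stoch_grad
    x = x + v
    return x, v

Its continuous-in-time counterpart is a second-order dynamic in which friction damps the velocity while the gradient of the objective drives it.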

Convergence of stochastic gradient descent schemes for Łojasiewicz-landscapes

no code implementations • 16 Feb 2021 • Steffen Dereich, Sebastian Kassing

In this article, we consider the convergence of stochastic gradient descent (SGD) schemes, including momentum stochastic gradient descent (MSGD), under weak assumptions on the underlying landscape.
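
For context, landscapes of this type satisfy a Łojasiewicz-type gradient inequality near critical points (standard form, not quoted from the paper): there exist C > 0 and \beta \in [1/2, 1) such that, in a neighborhood of a critical point x^*,

|f(x) - f(x^*)|^{\beta} \le C\,\|\nabla f(x)\|.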
