Search Results for author: Katharina Pohl

Found 1 paper, 0 papers with code

Convergence proof for stochastic gradient descent in the training of deep neural networks with ReLU activation for constant target functions

no code implementations · 13 Dec 2021 · Martin Hutzenthaler, Arnulf Jentzen, Katharina Pohl, Adrian Riekert, Luca Scarpa

In many numerical simulations, stochastic gradient descent (SGD) type optimization methods perform very effectively in the training of deep neural networks (DNNs). To this day, however, it remains an open research problem to provide a mathematical convergence analysis that rigorously explains the success of SGD-type optimization methods in the training of DNNs.
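To make the setting of the paper concrete, the following is a minimal sketch of plain SGD training a one-hidden-layer ReLU network to fit a constant target function. The architecture, hyperparameters, and random seed are illustrative assumptions, not the authors' setup; the paper's analysis concerns exactly this kind of training dynamics.

```python
import numpy as np

# Illustrative sketch only: online SGD on a one-hidden-layer ReLU network
# fitting a constant target function f(x) = 1 under squared loss.
# Width, learning rate, and step count are assumptions for the demo.
rng = np.random.default_rng(0)
d, h = 2, 16                 # input dimension, hidden width (assumed)
target = 1.0                 # constant target function

W1 = rng.normal(0.0, 1.0, (h, d))
b1 = np.zeros(h)
w2 = rng.normal(0.0, 1.0 / np.sqrt(h), h)
b2 = 0.0
lr = 0.05

for step in range(2000):
    x = rng.uniform(-1.0, 1.0, d)   # sample one training input
    z = W1 @ x + b1
    a = np.maximum(z, 0.0)          # ReLU activation
    y = w2 @ a + b2
    err = y - target                # gradient factor of 0.5 * (y - target)^2
    # Backpropagate through the single hidden layer
    dz = (err * w2) * (z > 0)
    w2 -= lr * err * a
    b2 -= lr * err
    W1 -= lr * np.outer(dz, x)
    b1 -= lr * dz

# After training, the network output should be close to the constant target.
x_test = rng.uniform(-1.0, 1.0, d)
y_test = w2 @ np.maximum(W1 @ x_test + b1, 0.0) + b2
print(abs(y_test - target))
```

Because the target is constant, even the output bias `b2` alone could absorb it, so the empirical risk typically decays quickly; the paper's contribution is a rigorous proof of convergence for this regime, which the numerics alone do not provide.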
