no code implementations • 13 Dec 2021 • Martin Hutzenthaler, Arnulf Jentzen, Katharina Pohl, Adrian Riekert, Luca Scarpa
In many numerical simulations, stochastic gradient descent (SGD) type optimization methods perform very effectively in the training of deep neural networks (DNNs), but to this day it remains an open research problem to provide a mathematical convergence analysis that rigorously explains the success of SGD type optimization methods in the training of DNNs.
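To make the object of study concrete, here is a minimal sketch of the plain SGD update rule on a toy least-squares problem. This is purely illustrative and not the convergence analysis from the paper; the problem size, learning rate, and batch size are arbitrary assumptions.

```python
# Illustrative sketch: the generic SGD update w_{k+1} = w_k - lr * g_k,
# where g_k is a stochastic (mini-batch) gradient estimate.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression data: y = A @ w_true + small noise.
n, d = 200, 5
A = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = A @ w_true + 0.01 * rng.normal(size=n)

def sgd(steps=2000, lr=0.05, batch=16):
    w = np.zeros(d)
    for _ in range(steps):
        idx = rng.integers(0, n, size=batch)              # sample a mini-batch
        grad = A[idx].T @ (A[idx] @ w - y[idx]) / batch   # stochastic gradient
        w -= lr * grad                                    # SGD update step
    return w

w_hat = sgd()
final_loss = float(np.mean((A @ w_hat - y) ** 2))
print(final_loss)
```

Even in this convex toy setting the iterates only converge in a suitable stochastic sense; the hard open question the paper addresses is why analogous updates succeed on the highly non-convex loss landscapes of DNNs.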
no code implementations • 22 Dec 2020 • Christian Beck, Martin Hutzenthaler, Arnulf Jentzen, Benno Kuckuck
It is one of the most challenging problems in applied mathematics to approximately solve high-dimensional partial differential equations (PDEs).