no code implementations • 20 Dec 2023 • Ahmed Abdeljawad, Philipp Grohs
While it is well-known that neural networks enjoy excellent approximation capabilities, it remains a big challenge to compute such approximations from point samples.
1 code implementation • 15 Jul 2023 • Michael Scherbela, Leon Gerard, Philipp Grohs
Obtaining accurate solutions to the Schr\"odinger equation is the key challenge in computational quantum chemistry.
1 code implementation • 4 Apr 2023 • Pavol Harar, Lukas Herrmann, Philipp Grohs, David Haselbach
A key shortcoming of these supervised learning methods is their need for large training data sets, typically generated from particle models in conjunction with complex numerical forward models simulating the physics of transmission electron microscopes.
4 code implementations • 17 Mar 2023 • Michael Scherbela, Leon Gerard, Philipp Grohs
Furthermore, we provide ample experimental evidence to support the idea that extensive pre-training of such a generalized wavefunction model across different compounds and geometries could lead to a foundation wavefunction model.
1 code implementation • 26 May 2022 • Julius Berner, Philipp Grohs, Felix Voigtlaender
Statistical learning theory provides bounds on the number of training samples needed to reach a prescribed accuracy in a learning problem formulated over a given target class.
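Schematically, and with notation that is ours rather than the paper's, such a bound typically states that, with probability at least $1-\delta$, learning over a hypothesis class $\mathcal{H}$ reaches accuracy $\varepsilon$ once the sample size $m$ satisfies
$$m \gtrsim \varepsilon^{-2}\big(\log \mathcal{N}(\mathcal{H},\varepsilon) + \log(1/\delta)\big),$$
where $\mathcal{N}(\mathcal{H},\varepsilon)$ is a covering number of $\mathcal{H}$; the exact exponents and complexity measure depend on the setting.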
2 code implementations • 19 May 2022 • Leon Gerard, Michael Scherbela, Philipp Marquetand, Philipp Grohs
Finding accurate solutions to the Schr\"odinger equation is the key unsolved challenge of computational chemistry.
no code implementations • 20 Dec 2021 • Ahmed Abdeljawad, Philipp Grohs
In this work, we derive a formula for the integral representation of a shallow neural network with the Rectified Power Unit activation function.
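The paper's exact formula is not reproduced here; generically, an integral representation of a shallow network with the Rectified Power Unit $\sigma_k(t) = \max(0,t)^k$ expresses a function as a superposition of ridge functions over all weight-bias pairs,
$$f(x) = \int_{\mathbb{S}^{d-1}\times\mathbb{R}} \big(\langle w, x\rangle - b\big)_+^{k}\,\mathrm{d}\mu(w,b), \qquad (t)_+^k := \max(0,t)^k,$$
for a suitable signed measure $\mu$; a finite shallow network corresponds to a discrete $\mu$.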
no code implementations • 28 Oct 2021 • Philipp Grohs, Felix Voigtlaender
We consider neural network approximation spaces that classify functions according to the rate at which they can be approximated (with error measured in $L^p$) by ReLU neural networks with an increasing number of coefficients, subject to bounds on the magnitude of the coefficients and the number of hidden layers.
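In schematic form (the notation here is ours): writing $\Gamma_n(f)$ for the best $L^p$-approximation error of $f$ by ReLU networks with at most $n$ coefficients, at most $\ell$ hidden layers, and coefficients of bounded magnitude, the approximation space with rate $\alpha > 0$ collects the functions for which
$$\Gamma_n(f) = O(n^{-\alpha}) \quad \text{as } n \to \infty.$$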
5 code implementations • 18 May 2021 • Michael Scherbela, Rafael Reisenhofer, Leon Gerard, Philipp Marquetand, Philipp Grohs
Accurate numerical solutions for the Schr\"odinger equation are of utmost importance in quantum chemistry.
no code implementations • 9 May 2021 • Julius Berner, Philipp Grohs, Gitta Kutyniok, Philipp Petersen
We describe the new field of mathematical analysis of deep learning.
no code implementations • 6 Apr 2021 • Philipp Grohs, Felix Voigtlaender
Such algorithms (most prominently stochastic gradient descent and its variants) are used extensively in the field of deep learning.
no code implementations • 9 Mar 2021 • Philipp Grohs, Lukas Herrmann
The approximation of solutions to second order Hamilton--Jacobi--Bellman (HJB) equations by deep neural networks is investigated.
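For orientation, a generic second order HJB equation from stochastic control reads
$$\partial_t u(t,x) + \inf_{a \in A}\Big\{\tfrac{1}{2}\operatorname{Tr}\big(\sigma(x,a)\sigma(x,a)^{\top}\nabla_x^2 u(t,x)\big) + \langle b(x,a), \nabla_x u(t,x)\rangle + f(x,a)\Big\} = 0, \qquad u(T,\cdot) = g,$$
where $A$ is the set of controls; the paper's precise assumptions on the coefficients are not reproduced here.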
no code implementations • 23 Dec 2020 • Ahmed Abdeljawad, Philipp Grohs
Solutions of evolution equations generally lie in certain Bochner-Sobolev spaces, in which the solution may have regularity and integrability properties in the time variable that differ from those in the space variables.
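As one standard instance (for integer time-regularity $s$), the Bochner-Sobolev space $W^{s,p}(0,T; W^{r,q}(\Omega))$ carries the norm
$$\|u\|^p_{W^{s,p}(0,T;W^{r,q}(\Omega))} = \sum_{j=0}^{s} \int_0^T \big\|\partial_t^j u(t)\big\|_{W^{r,q}(\Omega)}^p\,\mathrm{d}t,$$
so that the exponents $(s,p)$ governing the time variable may differ from the exponents $(r,q)$ governing the space variables.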
1 code implementation • NeurIPS 2020 • Julius Berner, Markus Dablander, Philipp Grohs
We show that a single deep neural network trained on simulated data is capable of learning the solution functions of an entire family of PDEs on a full space-time region.
no code implementations • 3 Aug 2020 • Philipp Grohs, Andreas Klotz, Felix Voigtlaender
We also provide quantitative and non-asymptotic bounds on the probability that a random $f\in\mathcal{S}$ can be encoded to within accuracy $\varepsilon$ using $R$ bits.
no code implementations • 10 Jul 2020 • Philipp Grohs, Lukas Herrmann
In recent work it has been established that deep neural networks are capable of approximating solutions to a large class of parabolic partial differential equations without incurring the curse of dimension.
no code implementations • 20 Nov 2019 • Lukas Gonon, Philipp Grohs, Arnulf Jentzen, David Kofler, David Šiška
These mathematical results from the scientific literature prove in part that algorithms based on ANNs are capable of overcoming the curse of dimensionality in the numerical approximation of high-dimensional PDEs.
1 code implementation • 28 Aug 2019 • Philipp Grohs, Arnulf Jentzen, Diyora Salimova
One key argument in most of these results is, first, to use a Monte Carlo approximation scheme which can approximate the solution of the PDE under consideration at a fixed space-time point without the curse of dimensionality and, thereafter, to prove that DNNs are flexible enough to mimic the behaviour of this approximation scheme.
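As a minimal illustration of the first step, here is a NumPy sketch (the function name and the heat-equation example are ours) of a Monte Carlo approximation of a PDE solution at a single space-time point via the Feynman-Kac formula; its $O(n^{-1/2})$ statistical error rate is independent of the dimension $d$.

```python
import numpy as np

def heat_solution_mc(phi, x, t, n_samples=100_000, rng=None):
    """Monte Carlo estimate of u(t, x) for the heat equation
    u_t = Laplace(u), u(0, .) = phi, via the Feynman-Kac formula
    u(t, x) = E[phi(x + sqrt(2 t) Z)] with Z ~ N(0, I_d).
    The error decays like n_samples**(-1/2), independently of d.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    z = rng.standard_normal((n_samples, x.shape[0]))
    return phi(x + np.sqrt(2.0 * t) * z).mean()

# Example with phi(x) = ||x||^2, where u(t, x) = ||x||^2 + 2*d*t exactly.
d, t = 100, 0.5
estimate = heat_solution_mc(lambda y: (y ** 2).sum(axis=1), np.zeros(d), t)
print(estimate)  # close to the exact value 2 * d * t = 100.0
```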
no code implementations • 11 Aug 2019 • Philipp Grohs, Fabian Hornung, Arnulf Jentzen, Philipp Zimmermann
The main result of this article provides space-time error estimates for DNN approximations of Euler approximations of certain perturbed differential equations.
no code implementations • NeurIPS 2019 • Julius Berner, Dennis Elbrächter, Philipp Grohs
Approximation capabilities of neural networks can be used to deal with the latter non-convexity, which allows us to establish that for sufficiently large networks local minima of a regularized optimization problem on the realization space are almost optimal.
no code implementations • 13 May 2019 • Julius Berner, Dennis Elbrächter, Philipp Grohs, Arnulf Jentzen
Although for neural networks with locally Lipschitz continuous activation functions the classical derivative exists almost everywhere, the standard chain rule is in general not applicable.
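A minimal sketch of the issue (the example is a standard one, not taken verbatim from the paper): the ReLU network $x \mapsto \mathrm{relu}(x) - \mathrm{relu}(-x)$ realizes the identity function, yet formally applying the chain rule with the almost-everywhere derivative of ReLU gives the wrong value at $x = 0$.

```python
import numpy as np

relu = lambda x: np.maximum(x, 0.0)
relu_prime = lambda x: np.where(x > 0, 1.0, 0.0)  # a.e. derivative, relu'(0) := 0

# f(x) = relu(x) - relu(-x) equals x, so the true derivative is 1 everywhere.
x = 0.0
formal_derivative = relu_prime(x) * 1.0 - relu_prime(-x) * (-1.0)
print(formal_derivative)  # 0.0 via the formal chain rule, although f'(0) = 1
```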
no code implementations • 17 Jan 2019 • Dominik Alfke, Weston Baines, Jan Blechschmidt, Mauricio J. del Razo Sarmina, Amnon Drory, Dennis Elbrächter, Nando Farchmin, Matteo Gambara, Silke Glas, Philipp Grohs, Peter Hinz, Danijel Kivaranovic, Christian Kümmerle, Gitta Kutyniok, Sebastian Lunz, Jan Macdonald, Ryan Malthaner, Gregory Naisat, Ariel Neufeld, Philipp Christian Petersen, Rafael Reisenhofer, Jun-Da Sheng, Laura Thesing, Philipp Trunschke, Johannes von Lindheim, David Weber, Melanie Weber
We present a novel technique based on deep learning and set theory which yields exceptional classification and prediction results.
no code implementations • 8 Jan 2019 • Dennis Elbrächter, Dmytro Perekrestenko, Philipp Grohs, Helmut Bölcskei
This paper develops fundamental limits of deep neural network learning by characterizing what is possible if no constraints are imposed on the learning algorithm and on the amount of training data.
no code implementations • 9 Sep 2018 • Julius Berner, Philipp Grohs, Arnulf Jentzen
It can be concluded that ERM over deep neural network hypothesis classes overcomes the curse of dimensionality for the numerical solution of linear Kolmogorov equations with affine coefficients.
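In schematic form (notation ours), the ERM estimator here is
$$\widehat{\Phi} \in \operatorname*{arg\,min}_{\Phi \in \mathcal{H}} \frac{1}{m}\sum_{i=1}^{m}\big(\Phi(X_i) - Y_i\big)^2,$$
where $\mathcal{H}$ is a class of deep neural networks and the labels $Y_i$ can be generated by simulating the stochastic differential equation associated to the Kolmogorov equation.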
no code implementations • 7 Sep 2018 • Philipp Grohs, Fabian Hornung, Arnulf Jentzen, Philippe von Wurstemberger
Such numerical simulations suggest that ANNs have the capacity to approximate high-dimensional functions very efficiently and, in particular, indicate that ANNs seem to possess the fundamental power to overcome the curse of dimensionality when approximating the high-dimensional functions appearing in the above-named computational problems.
no code implementations • ICLR 2019 • Dmytro Perekrestenko, Philipp Grohs, Dennis Elbrächter, Helmut Bölcskei
We show that finite-width deep ReLU neural networks yield rate-distortion optimal approximation (B\"olcskei et al., 2018) of polynomials, windowed sinusoidal functions, one-dimensional oscillatory textures, and the Weierstrass function, a fractal function which is continuous but nowhere differentiable.
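For reference, the Weierstrass function is
$$W(x) = \sum_{k=0}^{\infty} a^k \cos(b^k \pi x), \qquad 0 < a < 1,$$
which is continuous everywhere and nowhere differentiable whenever $ab \ge 1$ (Hardy, 1916).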
no code implementations • 1 Jun 2018 • Christian Beck, Sebastian Becker, Philipp Grohs, Nor Jaafari, Arnulf Jentzen
Stochastic differential equations (SDEs) and the Kolmogorov partial differential equations (PDEs) associated to them have been widely used in models from engineering, finance, and the natural sciences.
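The link between the two is classical: if $X^x_t$ solves the SDE $\mathrm{d}X_t = \mu(X_t)\,\mathrm{d}t + \sigma(X_t)\,\mathrm{d}W_t$ with $X_0 = x$, then, under suitable regularity assumptions, $u(t,x) = \mathbb{E}\big[\varphi(X^x_t)\big]$ solves the Kolmogorov PDE
$$\partial_t u = \tfrac{1}{2}\operatorname{Tr}\big(\sigma\sigma^{\top}\nabla_x^2 u\big) + \langle \mu, \nabla_x u\rangle, \qquad u(0,\cdot) = \varphi.$$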
no code implementations • 10 Jul 2017 • Thomas Wiatowski, Philipp Grohs, Helmut Bölcskei
Finally, for networks based on Weyl-Heisenberg filters, we determine the prototype function bandwidth that minimizes---for fixed network depth $N$---the average number of operationally significant nodes per layer.
no code implementations • 4 May 2017 • Helmut Bölcskei, Philipp Grohs, Gitta Kutyniok, Philipp Petersen
Specifically, all function classes that are optimally approximated by a general class of representation systems---so-called \emph{affine systems}---can be approximated by deep neural networks with minimal connectivity and memory requirements.
no code implementations • 12 Apr 2017 • Thomas Wiatowski, Philipp Grohs, Helmut Bölcskei
This paper establishes conditions for energy conservation (and thus for a trivial null-set) for a wide class of deep convolutional neural network-based feature extractors and characterizes corresponding feature map energy decay rates.
no code implementations • 26 May 2016 • Thomas Wiatowski, Michael Tschannen, Aleksandar Stanić, Philipp Grohs, Helmut Bölcskei
First steps towards a mathematical theory of deep convolutional neural networks for feature extraction were made---for the continuous-time case---in Mallat, 2012, and Wiatowski and B\"olcskei, 2015.
no code implementations • 29 Apr 2016 • Philipp Grohs, Thomas Wiatowski, Helmut Bölcskei
Wiatowski and B\"olcskei, 2015, proved that deformation stability and vertical translation invariance of deep convolutional neural network-based feature extractors are guaranteed by the network structure per se rather than the specific convolution kernels and non-linearities.