no code implementations • 3 Aug 2022 • Patrick Cheridito, Arnulf Jentzen, Florian Rossmannek
Dynamical systems theory has recently been applied in optimization to prove that gradient descent algorithms avoid so-called strict saddle points of the loss function.
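The strict-saddle phenomenon can be illustrated with a toy sketch (not the paper's analysis): for $f(x, y) = x^2 - y^2$, the origin is a strict saddle because the Hessian has a strictly negative eigenvalue, and gradient descent from a generic initialization moves away from it along the unstable direction.

```python
import numpy as np

# f(x, y) = x**2 - y**2 has a strict saddle at the origin:
# the Hessian diag(2, -2) has a strictly negative eigenvalue.
def grad(p):
    x, y = p
    return np.array([2.0 * x, -2.0 * y])

rng = np.random.default_rng(0)
p = rng.normal(scale=1e-3, size=2)  # generic start near the saddle
lr = 0.1
for _ in range(200):
    p = p - lr * grad(p)

# The x-coordinate contracts toward 0, while the iterates escape
# along the unstable y-direction.
print(abs(p[0]), abs(p[1]))
```

This is the center-stable-manifold intuition behind such avoidance results: initializations that converge to the saddle form a measure-zero set.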
no code implementations • 3 Dec 2021 • Patrick Cheridito, Balint Gersey
Theoretically, the conditional expectation of a square-integrable random variable $Y$ given a $d$-dimensional random vector $X$ can be obtained by minimizing the mean squared distance between $Y$ and $f(X)$ over all Borel measurable functions $f \colon \mathbb{R}^d \to \mathbb{R}$.
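The characterization can be checked numerically in a simple setting (an illustration, not the paper's method): replace "all Borel measurable functions" by a small parametric class and minimize the empirical mean squared distance; the minimizer approximates $E[Y \mid X]$. Here $Y = \sin(X) + \varepsilon$, so $E[Y \mid X] = \sin(X)$; the data and the degree-5 polynomial class are made up for the example.

```python
import numpy as np

# Approximate E[Y | X] by minimizing the empirical mean squared distance
# between Y and f(X) over degree-5 polynomials (fitted by least squares).
rng = np.random.default_rng(1)
n = 10_000
X = rng.uniform(-2.0, 2.0, size=n)
Y = np.sin(X) + rng.normal(scale=0.1, size=n)   # E[Y | X] = sin(X)

coeffs = np.polyfit(X, Y, deg=5)                # least-squares MSE minimizer
f_hat = np.polyval(coeffs, 0.5)

print(f_hat, np.sin(0.5))   # the fit should be close to sin(0.5)
```

In the paper the parametric class is a neural network rather than polynomials; the minimization principle is the same.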
no code implementations • 26 May 2021 • Patrick Cheridito, John Ery, Mario V. Wüthrich
We introduce a neural network approach for assessing the risk of a portfolio of assets and liabilities over a given time period.
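For context, the risk measures such an approach targets can be sketched on simulated data (this is not the paper's network; the Gaussian loss distribution and the 99% level are made-up inputs for illustration): empirical value-at-risk is a quantile of the one-period loss, and expected shortfall is the mean loss beyond it.

```python
import numpy as np

# Empirical value-at-risk and expected shortfall of a simulated
# one-period portfolio loss distribution (illustrative Gaussian losses).
rng = np.random.default_rng(4)
losses = rng.normal(loc=0.0, scale=1.0, size=1_000_000)

alpha = 0.99
var = np.quantile(losses, alpha)        # value-at-risk at level 99%
es = losses[losses >= var].mean()       # expected shortfall (tail mean)

print(var, es)   # for N(0, 1): VaR ≈ 2.33, ES ≈ 2.67
```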
no code implementations • 19 Mar 2021 • Patrick Cheridito, Arnulf Jentzen, Florian Rossmannek
In this paper, we analyze the landscape of the true loss of neural networks with one hidden layer and ReLU, leaky ReLU, or quadratic activation.
no code implementations • 19 Feb 2021 • Patrick Cheridito, Arnulf Jentzen, Adrian Riekert, Florian Rossmannek
A suitably constructed Lyapunov function is the central tool in our proof of convergence for the gradient descent method.
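The Lyapunov-function idea can be sketched on a toy objective (not the paper's construction): along gradient descent with a sufficiently small step size, a suitable Lyapunov function, here simply the loss itself, decreases monotonically along the iterates, which is the property a convergence proof exploits. The objective below is made up for the example.

```python
import numpy as np

# The loss serves as a Lyapunov function: for a gradient-Lipschitz
# objective and a small enough step size, gradient descent decreases
# it monotonically along the iterates.
def loss(w):
    return 0.5 * w @ w + np.sin(w[0])

def grad(w):
    g = w.copy()
    g[0] += np.cos(w[0])
    return g

w = np.array([3.0, -2.0])
lr = 0.1                      # well below 2/L for this objective
values = [loss(w)]
for _ in range(100):
    w = w - lr * grad(w)
    values.append(loss(w))

print(all(b <= a for a, b in zip(values, values[1:])))   # monotone decrease
```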
no code implementations • 2 Dec 2020 • Christian Beck, Sebastian Becker, Patrick Cheridito, Arnulf Jentzen, Ariel Neufeld
In this article we introduce and study a deep learning-based approximation algorithm for solutions of stochastic partial differential equations (SPDEs).
no code implementations • 12 Jun 2020 • Patrick Cheridito, Arnulf Jentzen, Florian Rossmannek
Deep neural networks have successfully been trained in various application areas with stochastic gradient descent.
1 code implementation • 23 Dec 2019 • Sebastian Becker, Patrick Cheridito, Arnulf Jentzen
In this paper we introduce a deep learning method for pricing and hedging American-style options.
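To make the underlying optimal-stopping problem concrete, here is a classical Longstaff-Schwartz baseline for a Bermudan put, not the paper's deep learning method; all parameters ($S_0$, $K$, $r$, $\sigma$, $T$, grid sizes) are made up, and the regression basis is a simple quadratic.

```python
import numpy as np

# Classical Longstaff-Schwartz pricing of a Bermudan put
# (baseline illustration; not the paper's deep learning approach).
rng = np.random.default_rng(2)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
steps, paths = 50, 100_000
dt = T / steps

# Simulate geometric Brownian motion paths.
Z = rng.standard_normal((paths, steps))
logret = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z
S = np.hstack([np.full((paths, 1), S0), S0 * np.exp(np.cumsum(logret, axis=1))])

payoff = lambda s: np.maximum(K - s, 0.0)
cash = payoff(S[:, -1])                   # cashflow if held to maturity
for t in range(steps - 1, 0, -1):
    cash *= np.exp(-r * dt)               # discount one step back to time t
    itm = payoff(S[:, t]) > 0
    # Regress continuation value on a quadratic basis over in-the-money paths.
    coeffs = np.polyfit(S[itm, t], cash[itm], deg=2)
    cont = np.polyval(coeffs, S[itm, t])
    exercise = payoff(S[itm, t]) > cont
    idx = np.where(itm)[0][exercise]
    cash[idx] = payoff(S[idx, t])         # exercise early on those paths

price = np.exp(-r * dt) * cash.mean()
print(price)   # should lie above the European put value (~5.57 here)
```

The deep learning method replaces the hand-picked regression basis with neural networks and also delivers hedging strategies, which the plain regression approach does not.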
no code implementations • 9 Dec 2019 • Patrick Cheridito, Arnulf Jentzen, Florian Rossmannek
In this paper, we develop a framework for showing that neural networks can overcome the curse of dimensionality in different high-dimensional approximation problems.
Numerical Analysis 68T07 I.2.0
no code implementations • 5 Aug 2019 • Sebastian Becker, Patrick Cheridito, Arnulf Jentzen, Timo Welti
We present numerical results for a large number of example problems, which include the pricing of many high-dimensional American and Bermudan options, such as Bermudan max-call options in up to 5000 dimensions.
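For orientation, the payoff of a $d$-dimensional max-call with strike $K$ is $\max(\max_i S_i - K, 0)$; evaluating it is cheap even for $d = 5000$ (the price vector below is made up), so the computational difficulty lies in the optimal stopping rule, not the payoff.

```python
import numpy as np

# Max-call payoff in d = 5000 dimensions: max(max_i S_i - K, 0).
d, K = 5000, 100.0
rng = np.random.default_rng(3)
S = rng.uniform(80.0, 120.0, size=d)   # hypothetical asset price vector

payoff = max(S.max() - K, 0.0)
print(payoff)
```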
no code implementations • 8 Jul 2019 • Christian Beck, Sebastian Becker, Patrick Cheridito, Arnulf Jentzen, Ariel Neufeld
In this paper we introduce a numerical method for nonlinear parabolic PDEs that combines operator splitting with deep learning.
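The operator-splitting half of this idea can be sketched in one spatial dimension (an illustration only; the equation, grid, and parameters are made up, and a grid-based solver is exactly what deep learning is meant to replace in high dimensions): for $u_t = u_{xx} + u(1-u)$, Lie splitting alternates a diffusion sub-step with a pointwise reaction sub-step.

```python
import numpy as np

# Lie (operator) splitting for u_t = u_xx + u(1 - u) on a periodic grid:
# alternate an exact Fourier-space diffusion step with an explicit Euler
# step for the pointwise reaction term.
N, L_dom, dt, steps = 128, 2 * np.pi, 1e-3, 500
x = np.linspace(0.0, L_dom, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L_dom / N)   # angular wavenumbers
u = 0.5 + 0.4 * np.sin(x)

for _ in range(steps):
    # Diffusion sub-step: exact solution of u_t = u_xx in Fourier space.
    u = np.fft.ifft(np.exp(-k**2 * dt) * np.fft.fft(u)).real
    # Reaction sub-step: one explicit Euler step of u' = u(1 - u).
    u = u + dt * u * (1.0 - u)

print(u.min(), u.max())   # the solution stays within (0, 1)
```

In the paper's setting the split sub-problems live in high dimension, where no such grid is available, and a neural network approximates the corresponding solution operator.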