no code implementations • 20 Jan 2024 • Bryan Kelly, Boris Kuznetsov, Semyon Malamud, Teng Andrea Xu
We open up the black box behind Deep Learning for portfolio optimization and prove that a sufficiently wide and arbitrarily deep neural network (DNN) trained to maximize the Sharpe ratio of the Stochastic Discount Factor (SDF) is equivalent to a large factor model (LFM): a linear factor pricing model that uses many non-linear characteristics.
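A minimal sketch of the large-factor-model side of this equivalence, under assumed toy data: random nonlinear transformations of asset characteristics define factor portfolios, and ridge-regularized mean-variance weights give an approximate Sharpe-maximizing SDF portfolio. All shapes, the ReLU feature map, and the penalty `z` are hypothetical choices, not the paper's exact construction.

```python
import numpy as np

# Hypothetical toy setup: T months, N assets, d characteristics, P random features.
rng = np.random.default_rng(0)
T, N, d, P = 240, 100, 10, 2000

X = rng.normal(size=(T, N, d))       # asset characteristics
R = rng.normal(size=(T, N)) * 0.05   # excess returns

W = rng.normal(size=(d, P)) / np.sqrt(d)
S = np.maximum(X @ W, 0.0)           # random ReLU features: many non-linear characteristics

F = np.einsum("tnp,tn->tp", S, R)    # factor returns: characteristic-weighted portfolios
mu = F.mean(axis=0)
Sigma = np.cov(F, rowvar=False)

z = 1e-3                             # ridge penalty (assumed)
lam = np.linalg.solve(Sigma + z * np.eye(P), mu)  # ridge-regularized Sharpe-maximizing weights

sdf_port = F @ lam                   # returns of the approximate SDF portfolio
print(f"in-sample Sharpe: {sdf_port.mean() / sdf_port.std():.2f}")
```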
1 code implementation • 26 Jan 2023 • Teng Andrea Xu, Bryan Kelly, Semyon Malamud
The recent discovery of the equivalence between infinitely wide neural networks (NNs) in the lazy training regime and Neural Tangent Kernels (NTKs) (Jacot et al., 2018) has revived interest in kernel methods.
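To make the lazy-training picture concrete, here is a minimal sketch (not the paper's method) of the empirical NTK of a one-hidden-layer ReLU network at initialization, used for kernel ridge regression; the data, width, and regularizer are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 50, 5, 4096                # samples, input dim, hidden width (hypothetical)

X = rng.normal(size=(n, d))
y = np.sin(X[:, 0])                  # a toy target

W = rng.normal(size=(m, d))          # hidden weights at initialization
a = rng.normal(size=m)               # output weights at initialization

def ntk_features(X):
    """Gradient of f(x) = a @ relu(W x) / sqrt(m) w.r.t. all parameters."""
    pre = X @ W.T                    # (n, m) pre-activations
    act = np.maximum(pre, 0.0)       # relu
    dact = (pre > 0).astype(float)   # relu'
    grad_a = act / np.sqrt(m)
    grad_W = (a * dact)[:, :, None] * X[:, None, :] / np.sqrt(m)
    return np.concatenate([grad_a, grad_W.reshape(len(X), -1)], axis=1)

Phi = ntk_features(X)
K = Phi @ Phi.T                      # empirical NTK Gram matrix
alpha = np.linalg.solve(K + 1e-6 * np.eye(n), y)  # kernel ridge regression
print(f"train MSE: {np.mean((K @ alpha - y) ** 2):.2e}")
```

As the width grows, this empirical kernel concentrates around the deterministic NTK of Jacot et al. (2018), which is what makes the kernel-methods view tractable.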
1 code implementation • 2 Oct 2022 • Semyon Malamud, Teng Andrea Xu, Antoine Didisheim
Recent progress in Generative Artificial Intelligence (AI) relies on efficient data representations, often featuring encoder-decoder architectures.
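As a minimal sketch of the encoder-decoder idea behind such representations, the simplest instance is a linear autoencoder, whose reconstruction-optimal weights are the top principal components; the data shape and code size below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))      # 1000 samples, 64-dim inputs (assumed)
k = 8                                # bottleneck (code) dimension

Xc = X - X.mean(axis=0)              # center the data
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
E = Vt[:k].T                         # encoder: 64 -> 8
D = Vt[:k]                           # decoder: 8 -> 64

codes = Xc @ E                       # compressed representation
X_hat = codes @ D                    # reconstruction from the code
print(f"reconstruction MSE: {np.mean((Xc - X_hat) ** 2):.3f}")
```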
no code implementations • 10 Mar 2022 • Antoine Didisheim, Bryan Kelly, Semyon Malamud
Each layer of a Deep Regression Ensemble (DRE) has two components: randomly drawn input weights, and output weights trained myopically (treating the layer as if it were the final output layer) using linear ridge regression.
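A minimal sketch of one such layer, under assumed dimensions, activation, and ridge penalty: random untrained input weights produce nonlinear features, and only the output weights are fit, by ridge regression against the target.

```python
import numpy as np

rng = np.random.default_rng(0)

def dre_layer(X, y, width=512, z=1.0):
    # Input weights are random and never trained.
    W = rng.normal(size=(X.shape[1], width)) / np.sqrt(X.shape[1])
    H = np.maximum(X @ W, 0.0)                                    # nonlinear features
    # Output weights fit myopically by ridge regression, as if this were the final layer.
    beta = np.linalg.solve(H.T @ H + z * np.eye(width), H.T @ y)
    return H, H @ beta                                            # features and prediction

X = rng.normal(size=(500, 20))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)

H1, pred1 = dre_layer(X, y)          # first layer
H2, pred2 = dre_layer(H1, y)         # second layer stacked on the first's features
print(f"layer-1 MSE: {np.mean((pred1 - y)**2):.3f}, "
      f"layer-2 MSE: {np.mean((pred2 - y)**2):.3f}")
```

Because each layer is fit by a closed-form ridge solve rather than backpropagation, depth is added by stacking such layers on the previous layer's features.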
no code implementations • 17 Oct 2021 • Semyon Malamud, Andreas Schrimpf
When the sender's marginal utility is linear, revealing the full magnitude of good information is always optimal.
no code implementations • 22 Feb 2021 • Semyon Malamud, Anna Cieslak, Andreas Schrimpf
We study the general problem of Bayesian persuasion (optimal information design) with continuous actions and continuous state space in arbitrary dimensions.