no code implementations • 17 Aug 2022 • Aleksandr Beknazaryan
We show that $d$-variate polynomials of degree $R$ can be represented on $[0, 1]^d$ as shallow neural networks of width $2(R+d)^d$.
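For a sense of scale, here is a small worked instance of the stated width bound; the specific numbers are my own illustration, not taken from the paper.

```latex
% Illustrative arithmetic only: bivariate polynomials of degree 3,
% i.e. d = 2, R = 3, under the stated bound.
\[
  2(R+d)^d \;=\; 2(3+2)^2 \;=\; 50,
\]
% so one hidden layer of width 50 suffices for every polynomial of degree
% at most 3 on [0,1]^2.  For comparison, the space of d-variate polynomials
% of degree at most R has dimension \binom{R+d}{d} = \binom{5}{2} = 10.
```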
no code implementations • 17 Jul 2022 • Aleksandr Beknazaryan, Hailin Sang
We consider regression estimation with modified ReLU neural networks in which network weight matrices are first modified by a function $\alpha$ before being multiplied by input vectors.
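A minimal sketch of what such a layer could look like, assuming a PyTorch-style implementation; the stand-in choice $\alpha=\tanh$ and all dimensions below are illustrative and not taken from the paper.

```python
import torch
import torch.nn as nn

class AlphaModifiedLinear(nn.Module):
    """Linear layer whose weight matrix is passed entrywise through a
    function alpha before multiplying the input, as described above.
    The choice alpha = tanh is a stand-in, not the paper's alpha."""
    def __init__(self, in_features, out_features, alpha=torch.tanh):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.alpha = alpha

    def forward(self, x):
        # apply alpha to the weights, then the usual affine map
        return x @ self.alpha(self.weight).t() + self.bias

# minimal usage: a two-layer "modified ReLU" network
net = nn.Sequential(AlphaModifiedLinear(4, 16), nn.ReLU(), AlphaModifiedLinear(16, 1))
y = net(torch.rand(8, 4))
```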
no code implementations • 27 Jun 2022 • Aleksandr Beknazaryan
We show that deep sparse ReLU networks with ternary weights and deep ReLU networks with binary weights can approximate $\beta$-H\"older functions on $[0, 1]^d$.
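To make "ternary weights" concrete, a small illustrative sketch follows; the threshold and the rounding scheme are my own, whereas the networks in the paper are explicit constructions, not rounded ones.

```python
import torch
import torch.nn as nn

def ternarize(w, threshold=0.05):
    """Map each weight to {-1, 0, +1}: zero if |w| <= threshold,
    otherwise its sign.  Illustrates the ternary weight constraint only."""
    return torch.where(w.abs() <= threshold, torch.zeros_like(w), torch.sign(w))

layer = nn.Linear(3, 5)
with torch.no_grad():
    layer.weight.copy_(ternarize(layer.weight))
print(layer.weight.unique())  # subset of {-1., 0., 1.}
```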
no code implementations • 20 May 2021 • Aleksandr Beknazaryan
An example of an activation function $\sigma$ is given such that networks with activations $\{\sigma, \lfloor\cdot\rfloor\}$, integer weights and a fixed architecture depending on $d$ approximate continuous functions on $[0, 1]^d$.
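The $\sigma$ constructed in the paper is not reproduced here; the sketch below only illustrates the ingredients named in the abstract, with `torch.sin` standing in for $\sigma$ and an arbitrary fixed architecture.

```python
import torch
import torch.nn as nn

sigma = torch.sin  # placeholder only, NOT the sigma constructed in the paper

class IntegerWeightLayer(nn.Module):
    """Layer with integer weights (stored as floats so matmul works)."""
    def __init__(self, in_f, out_f, activation):
        super().__init__()
        self.weight = torch.randint(-3, 4, (out_f, in_f)).float()
        self.activation = activation

    def forward(self, x):
        return self.activation(x @ self.weight.t())

# a network using both activations: the floor function and the stand-in sigma
net = nn.Sequential(IntegerWeightLayer(2, 8, torch.floor),
                    IntegerWeightLayer(8, 1, sigma))
y = net(torch.rand(5, 2))
```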
no code implementations • 5 Apr 2021 • Aleksandr Beknazaryan
We show that neural networks with the absolute value activation function, whose path norm, depth, width and weights all have logarithmic dependence on $1/\varepsilon$, can $\varepsilon$-approximate functions that are analytic on certain regions of $\mathbb{C}^d$.
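As a reading aid, here is a tiny absolute-value-activation network together with the common definition of the path norm (sum over input-to-output paths of products of absolute weights, biases ignored), which may differ in detail from the paper's definition.

```python
import torch

# a one-hidden-layer network with the absolute value activation
W1 = torch.randn(16, 3)
W2 = torch.randn(1, 16)

def net(x):
    return torch.abs(x @ W1.t()) @ W2.t()

y = net(torch.rand(5, 3))

# for a layered net without biases, the path norm equals the sum of the
# entries of |W2| @ |W1|
path_norm = (W2.abs() @ W1.abs()).sum()
print(float(path_norm))
```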
no code implementations • 15 Mar 2021 • Aleksandr Beknazaryan
In this paper it is shown that $C_\beta$-smooth functions can be approximated by deep neural networks with the ReLU activation function and with parameters in $\{0, \pm\frac{1}{2}, \pm 1, 2\}$.
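An illustrative way to see what the restricted parameter set means in code: snapping an arbitrary weight tensor to $\{0, \pm\frac{1}{2}, \pm 1, 2\}$ by nearest-value rounding. The paper's networks are explicit constructions and are not obtained by rounding; this only shows the allowed values.

```python
import torch

# the allowed parameter values from the abstract
values = torch.tensor([0.0, 0.5, -0.5, 1.0, -1.0, 2.0])

def snap(w):
    # replace each entry by the closest value in the allowed set
    idx = (w.unsqueeze(-1) - values).abs().argmin(dim=-1)
    return values[idx]

w = torch.randn(4, 4)
print(snap(w))
```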