Expressive power of binary and ternary neural networks

27 Jun 2022 · Aleksandr Beknazaryan

We show that deep sparse ReLU networks with ternary weights and deep ReLU networks with binary weights can approximate $\beta$-Hölder functions on $[0,1]^d$. Also, for any interval $[a,b)\subset\mathbb{R}$, continuous functions on $[0,1]^d$ can be approximated by networks of depth $2$ with binary activation function $\mathds{1}_{[a,b)}$.
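For the first claim, a minimal sketch of why restricting weights to $\{-1,0,1\}$ is not crippling: the hat function $2\min(x,1-x)$, a standard building block in ReLU approximation arguments, needs the coefficients $2,-4,2$, but these can be realized with purely ternary weights by duplicating hidden units. The Python below is an illustration under that assumption (biases are kept real-valued here, and this is not the paper's actual construction).

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def hat(x):
    # hat(x) = 2*relu(x) - 4*relu(x - 1/2) + 2*relu(x - 1) = 2*min(x, 1-x) on [0, 1].
    # The coefficients 2, -4, 2 are realized with weights in {-1, 0, 1} by
    # duplicating hidden units: 2, 4 and 2 copies, each entering the output
    # sum with weight +1 or -1. Biases (-1/2, -1) stay real-valued in this toy.
    return (relu(x) + relu(x)
            - relu(x - 0.5) - relu(x - 0.5) - relu(x - 0.5) - relu(x - 0.5)
            + relu(x - 1.0) + relu(x - 1.0))

x = np.linspace(0.0, 1.0, 5)
print(hat(x))       # [0.  0.5 1.  0.5 0. ]
print(hat(hat(x)))  # two-tooth sawtooth: [0. 1. 0. 1. 0.]
```

Composing the hat with itself produces sawtooth functions with exponentially many teeth, the usual depth-driven mechanism behind approximation rates for Hölder-type classes; the duplication trick suggests that the price of ternary weights is extra width and sparsity budget rather than lost expressive power.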
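For the second claim, a one-dimensional sketch of the depth-$2$ mechanism: with activation $\mathds{1}_{[a,b)}$, each hidden unit $\sigma(wx+c)$ fires exactly on an interval, so a hidden layer of such units followed by a linear readout computes piecewise-constant functions, which approximate any continuous $f$ on $[0,1]$ uniformly as the partition is refined. The bin-based construction and names below are illustrative assumptions, not the paper's $d$-dimensional argument.

```python
import numpy as np

A, B = 0.0, 1.0  # the activation interval [a, b); any a < b works the same way

def sigma(z):
    # binary activation 1_{[a,b)}
    return ((z >= A) & (z < B)).astype(float)

def indicator_net(f, n):
    # One hidden layer of n indicator neurons plus a linear readout.
    # Unit j fires exactly on the bin [t_j, t_{j+1}) of a uniform partition
    # of [0, 1]; its output weight is f at the bin midpoint.
    t = np.linspace(0.0, 1.0, n + 1)
    w = (B - A) / np.diff(t)       # sigma(w*x + c) = 1  iff  t_j <= x < t_{j+1}
    c = A - w * t[:-1]
    v = f((t[:-1] + t[1:]) / 2)
    return lambda x: sigma(np.outer(x, w) + c) @ v

f = lambda x: np.sin(2 * np.pi * x)               # any continuous target
net = indicator_net(f, n=200)
xs = np.linspace(0.0, 1.0, 2000, endpoint=False)  # x = 1 lies outside every bin
print(np.abs(net(xs) - f(xs)).max())              # ~ pi/200: Lipschitz constant times half a bin width
```

In $d$ dimensions a single indicator layer only carves out slabs; intersecting coordinate-wise bins into cells is where the second layer of a depth-$2$ construction would come in, so the sketch deliberately stays one-dimensional.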
