Function approximation by deep neural networks with parameters {0, ±1/2, ±1, 2}

Aleksandr Beknazaryan

Research output: Working paper › Preprint › Academic


Abstract

In this paper it is shown that $C_\beta$-smooth functions can be approximated by deep neural networks with the ReLU activation function and with parameters in $\{0,\pm \frac{1}{2}, \pm 1, 2\}$. The $l_0$ and $l_1$ parameter norms of the considered networks are thus equivalent, as every nonzero parameter has absolute value between $\frac{1}{2}$ and $2$. The depth, width and number of active parameters of the constructed networks have, up to a logarithmic factor, the same dependence on the approximation error as networks with parameters in $[-1,1]$. In particular, this means that nonparametric regression estimation with the constructed networks attains the same convergence rate as with sparse networks with parameters in $[-1,1]$.
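
The norm equivalence noted in the abstract can be checked numerically. The following Python sketch is an illustration added here, not code from the paper; the project_to_grid helper is hypothetical and simply rounds arbitrary weights to the nearest value in {0, ±1/2, ±1, 2}, after which the $l_1$ norm of the parameter vector necessarily lies between half and twice the $l_0$ norm.

    import numpy as np

    # Allowed parameter values considered in the paper.
    ALLOWED = np.array([0.0, -0.5, 0.5, -1.0, 1.0, 2.0])

    def project_to_grid(w):
        # Round each weight to the nearest allowed value
        # (hypothetical helper, not the paper's construction).
        idx = np.abs(w[..., None] - ALLOWED).argmin(axis=-1)
        return ALLOWED[idx]

    rng = np.random.default_rng(0)
    w = project_to_grid(rng.normal(size=1000))

    l0 = np.count_nonzero(w)   # number of active (nonzero) parameters
    l1 = np.abs(w).sum()       # l1 parameter norm

    # Every nonzero entry has absolute value in {1/2, 1, 2}, so the two
    # norms are equivalent up to constant factors.
    assert 0.5 * l0 <= l1 <= 2.0 * l0
    print(l0, l1)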
Original language: Undefined
Publisher: ArXiv.org
DOIs
Publication status: Published - 15 Mar 2021

Keywords

  • stat.ML
  • cs.LG
