TY - UNPB
T1 - Function approximation by deep neural networks with parameters {0,±1/2,±1,2}
AU - Beknazaryan, Aleksandr
PY - 2021/3/15
Y1 - 2021/3/15
AB - In this paper it is shown that $C_\beta$-smooth functions can be approximated by deep neural networks with the ReLU activation function and with parameters $\{0,\pm \frac{1}{2}, \pm 1, 2\}$. The $l_0$ and $l_1$ parameter norms of the considered networks are thus equivalent. The depth, width, and number of active parameters of the constructed networks have, up to a logarithmic factor, the same dependence on the approximation error as networks with parameters in $[-1,1]$. In particular, this means that nonparametric regression estimation with the constructed networks attains the same convergence rate as with sparse networks with parameters in $[-1,1]$.
KW - stat.ML
KW - cs.LG
DO - 10.48550/arXiv.2103.08659
M3 - Preprint
BT - Function approximation by deep neural networks with parameters {0,±1/2,±1,2}
PB - arXiv.org
ER -