Abstract
In this paper, it is shown that $C^\beta$-smooth functions can be approximated by deep neural networks with the ReLU activation function and with parameters from $\{0, \pm\frac{1}{2}, \pm 1, 2\}$. Since every nonzero parameter then has absolute value between $\frac{1}{2}$ and $2$, the $\ell_0$ and $\ell_1$ parameter norms of the considered networks are equivalent. The depth, the width and the number of active parameters of the constructed networks have, up to a logarithmic factor, the same dependence on the approximation error as networks with parameters in $[-1,1]$. In particular, this implies that nonparametric regression estimation with the constructed networks achieves, up to logarithmic factors, the same minimax convergence rates as with sparse networks with parameters in $[-1,1]$.
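To make the parameter restriction concrete, here is a minimal numerical sketch (not taken from the paper; the helper names `hat` and `approx_square` are ours). It realizes the classical Yarotsky sawtooth approximation of $x^2$ on $[0,1]$ using only weights and biases from $\{0, \pm\frac{1}{2}, \pm 1, 2\}$: the coefficient $-4$ in the standard hat function is obtained by duplicating a hidden unit with outgoing weight $-1$, and the scaling factors $4^{-k}$ by chains of $1\times 1$ layers with weight $\frac{1}{2}$. This illustrates the flavor of the construction, not the paper's actual proof.

```python
import numpy as np

ALLOWED = {0.0, 0.5, -0.5, 1.0, -1.0, 2.0}  # the parameter set {0, ±1/2, ±1, 2}

def relu(z):
    return np.maximum(z, 0.0)

def check(*arrays):
    # Verify that every weight and bias lies in the allowed parameter set.
    for a in arrays:
        assert set(np.ravel(a)) <= ALLOWED

def hat(x):
    """One 'tooth': g(x) = 2x on [0,1/2], 2(1-x) on [1/2,1].
    Standard form g(x) = 2*relu(x) - 4*relu(x - 1/2); the weight -4 is
    not allowed, so the second hidden unit is duplicated four times and
    each copy gets outgoing weight -1."""
    W1 = np.array([[1.0], [1.0], [1.0], [1.0], [1.0]])  # 5 hidden units
    b1 = np.array([0.0, -0.5, -0.5, -0.5, -0.5])
    W2 = np.array([[2.0, -1.0, -1.0, -1.0, -1.0]])
    check(W1, b1, W2)
    h = relu(W1 @ np.atleast_2d(x) + b1[:, None])
    return (W2 @ h).ravel()

def approx_square(x, m=6):
    """Yarotsky-style approximation x^2 ≈ x - sum_{k=1}^m g^{(k)}(x) / 4^k
    on [0,1], with sup-norm error of order 4^{-(m+1)}. Each factor 4^{-k}
    is realized by 2k successive multiplications by 1/2, i.e. a chain of
    1x1 layers whose weight 1/2 is in the allowed set."""
    out = np.array(x, dtype=float)
    g = np.array(x, dtype=float)
    for k in range(1, m + 1):
        g = hat(g)                  # k-fold composition g ∘ ... ∘ g
        scaled = g.copy()
        for _ in range(2 * k):      # multiply by (1/2)^(2k) = 4^(-k)
            scaled = 0.5 * scaled
        out = out - scaled
    return out

x = np.linspace(0.0, 1.0, 1001)
err = np.max(np.abs(approx_square(x, m=6) - x**2))
print(f"sup-norm error on [0,1]: {err:.2e}")  # roughly 4^{-7} ≈ 6e-5
```

Note how the depth of the sketch grows only logarithmically in the target accuracy (each extra tooth adds a constant number of layers while quartering the error), which mirrors the dependence on the approximation error stated in the abstract.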
| Original language | English |
|---|---|
| Number of pages | 14 |
| Journal | Journal of Statistical Theory and Practice |
| Volume | 16 |
| Issue number | 1 |
| Early online date | 19 Jan 2022 |
| DOIs | |
| Publication status | Published - Mar 2022 |
Keywords
- UT-Hybrid-D
Research output
- 4 Citations
- 1 Preprint
Function approximation by deep neural networks with parameters $\{0, \pm\frac{1}{2}, \pm 1, 2\}$
Beknazaryan, A., 15 Mar 2021, ArXiv.org. Research output: Working paper › Preprint › Academic
Open Access