Function Approximation by Deep Neural Networks with Parameters $$\{0,\pm \frac{1}{2}, \pm 1, 2\}$$

  • Aleksandr Beknazaryan*
  • *Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review


Abstract

In this paper, it is shown that $C^\beta$-smooth functions can be approximated by deep neural networks with ReLU activation function and with parameters in $\{0, \pm \frac{1}{2}, \pm 1, 2\}$. The $\ell_0$ and $\ell_1$ parameter norms of the considered networks are thus equivalent. The depth, the width and the number of active parameters of the constructed networks have, up to a logarithmic factor, the same dependence on the approximation error as networks with parameters in $[-1,1]$. In particular, this implies that nonparametric regression estimation with the constructed networks attains, up to logarithmic factors, the same minimax convergence rates as with sparse networks with parameters in $[-1,1]$.
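The norm equivalence stated in the abstract is immediate from the parameter set: every nonzero parameter has absolute value in $\{\frac{1}{2}, 1, 2\}$, so $\frac{1}{2}\|\theta\|_0 \le \|\theta\|_1 \le 2\|\theta\|_0$ for any parameter vector $\theta$. As a minimal illustration of what such restricted parameters can express (a NumPy sketch, not the paper's construction), the network below computes the tent function $T(x) = 2\min(x, 1-x)$ on $[0,1]$, a standard building block in ReLU approximation arguments, using only weights and biases from $\{0, \pm \frac{1}{2}, \pm 1, 2\}$:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# Hidden layer: three ReLU units computing ReLU(x), ReLU(2x - 1),
# ReLU(2x - 1); every weight and bias lies in {0, +-1/2, +-1, 2}.
W1 = np.array([[1.0], [2.0], [2.0]])
b1 = np.array([0.0, -1.0, -1.0])

# Output layer: T(x) = 2*ReLU(x) - 2*ReLU(2x - 1) on [0, 1], but -2
# is not in the parameter set, so the coefficient -2 is split across
# the two duplicated hidden units with weight -1 each.
W2 = np.array([2.0, -1.0, -1.0])

def tent(x):
    """Tent function T(x) = 2*min(x, 1-x) for x in [0, 1]."""
    return float(W2 @ relu(W1 @ np.atleast_1d(x) + b1))

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(x, tent(x))  # 0.0, 0.5, 1.0, 0.5, 0.0
```

Splitting a coefficient across duplicated units, as done here, is a generic way to emulate weights outside the allowed set at the cost of a constant-factor increase in the number of active parameters.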
Original language: English
Number of pages: 14
Journal: Journal of Statistical Theory and Practice
Volume: 16
Issue number: 1
Early online date: 19 Jan 2022
DOIs
Publication status: Published - Mar 2022

Keywords

  • UT-Hybrid-D
