Lower Bounds on the Depth of Integral ReLU Neural Networks via Lattice Polytopes

Christian Haase, Christoph Hertrich, Georg Loho

Research output: Contribution to conference › Paper › peer-review


Abstract

We prove that the set of functions representable by ReLU neural networks with integer weights strictly increases with the network depth while allowing arbitrary width. More precisely, we show that ⌈log₂(n)⌉ hidden layers are indeed necessary to compute the maximum of n numbers, matching known upper bounds. Our results are based on the known duality between neural networks and Newton polytopes via tropical geometry. The integrality assumption implies that these Newton polytopes are lattice polytopes. Then, our depth lower bounds follow from a parity argument on the normalized volume of faces of such polytopes.
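The abstract refers to the known upper bound of ⌈log₂(n)⌉ hidden layers for computing the maximum of n numbers. The sketch below is not taken from the paper; the function names max2 and max_n are illustrative. It shows the standard construction with integer weights: max(a, b) = ReLU(a − b) + ReLU(b) − ReLU(−b) needs one hidden layer, and composing such gadgets in a balanced tree yields ⌈log₂(n)⌉ hidden layers overall.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def max2(a, b):
    """One hidden ReLU layer with integer weights computing max(a, b):
    max(a, b) = ReLU(a - b) + ReLU(b) - ReLU(-b)."""
    h = relu(np.array([a - b, b, -b]))    # hidden layer; all weights in {-1, 0, 1}
    return float(np.dot([1, 1, -1], h))   # integer output weights

def max_n(values):
    """Balanced tree of pairwise maxima: ceil(log2(n)) hidden layers overall.
    (In an actual network, an unpaired value is forwarded through a layer
    via the identity x = ReLU(x) - ReLU(-x); here it is simply carried over.)"""
    layer = list(values)
    while len(layer) > 1:
        nxt = [max2(layer[i], layer[i + 1]) for i in range(0, len(layer) - 1, 2)]
        if len(layer) % 2:
            nxt.append(layer[-1])
        layer = nxt
    return layer[0]

print(max_n([3.0, -1.5, 7.25, 2.0]))      # 7.25, using ceil(log2(4)) = 2 levels
```

The paper's lower bound states that this logarithmic depth cannot be improved for integer-weight networks, regardless of width.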

Original language: English
Number of pages: 13
Publication status: Published - 2023
Event: 11th International Conference on Learning Representations, ICLR 2023 - Kigali, Rwanda
Duration: 1 May 2023 – 5 May 2023
Conference number: 11

Conference

Conference: 11th International Conference on Learning Representations, ICLR 2023
Abbreviated title: ICLR 2023
Country/Territory: Rwanda
City: Kigali
Period: 1/05/23 – 5/05/23

