On Generalization Bounds for Deep Networks Based on Loss Surface Implicit Regularization

Masaaki Imaizumi*, Anselm Johannes Schmidt-Hieber

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review


Abstract

Classical statistical learning theory implies that fitting too many parameters leads to overfitting and poor performance. That modern deep neural networks generalize well despite a large number of parameters contradicts this finding and constitutes a major unsolved problem in explaining the success of deep learning. While previous work focuses on the implicit regularization induced by stochastic gradient descent (SGD), we study here how the local geometry of the energy landscape around local minima affects the statistical properties of SGD with Gaussian gradient noise. We argue that, under reasonable assumptions, the local geometry forces SGD to stay close to a low-dimensional subspace, and that this induces another form of implicit regularization and results in tighter bounds on the generalization error for deep neural networks. To derive generalization error bounds for neural networks, we first introduce a notion of stagnation sets around the local minima and impose a local essential convexity property of the population risk. Under these conditions, we derive lower bounds on the probability that SGD remains in these stagnation sets. If stagnation occurs, we derive a bound on the generalization error of deep neural networks that involves the spectral norms of the weight matrices but not the number of network parameters. Technically, our proofs are based on controlling the change of parameter values across the SGD iterates and on local uniform convergence of the empirical loss functions, based on the entropy of suitable neighborhoods around local minima.
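The geometric intuition in the abstract can be illustrated with a minimal toy sketch (not taken from the paper): gradient descent with additive Gaussian gradient noise on a quadratic risk that is curved in one coordinate and flat in the other. The curved coordinate is pulled back toward the minimum at every step, so the iterates stay concentrated near the flat (low-dimensional) subspace; all numerical values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad(w):
    # Population gradient of the toy risk L(w) = 0.5 * w[0]**2,
    # which is curved in w[0] and completely flat in w[1].
    return np.array([w[0], 0.0])

w = np.array([5.0, 0.0])   # start away from the minimum in the curved direction
eta, sigma = 0.1, 0.01     # step size and gradient-noise level (illustrative)

for _ in range(500):
    # SGD step with Gaussian gradient noise.
    w = w - eta * (grad(w) + sigma * rng.standard_normal(2))

# The curved coordinate contracts geometrically and then fluctuates at a
# scale of order sigma, i.e. the iterate "stagnates" near the flat subspace.
print(abs(w[0]) < 0.1)  # True
```

The flat coordinate, by contrast, performs a slow random walk: the noise alone cannot push the iterate far in a fixed number of steps, which is the kind of confinement the stagnation-set argument formalizes.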
Original language: English
Pages (from-to): 1203-1223
Number of pages: 21
Journal: IEEE Transactions on Information Theory
Volume: 69
Issue number: 2
Early online date: 14 Oct 2022
DOIs
Publication status: Published - Feb 2023

Keywords

  • Deep neural networks
  • generalization error
  • uniform convergence
  • non-convex optimization

