Hebbian learning inspired estimation of the linear regression parameters from queries

Research output: Working paper › Preprint › Academic


Abstract

Local learning rules in biological neural networks (BNNs) are commonly referred to as Hebbian learning. [26] links a biologically motivated Hebbian learning rule to a specific zeroth-order optimization method. In this work, we study a variation of this Hebbian learning rule to recover the regression vector in the linear regression model. Zeroth-order optimization methods are known to converge at a suboptimal rate in large parameter dimensions compared to first-order methods such as gradient descent, and are therefore generally thought to be inferior. By establishing upper and lower bounds, we show, however, that such methods achieve near-optimal rates if only queries of the linear regression loss are available. Moreover, we prove that this Hebbian learning rule can achieve considerably faster rates than any non-adaptive method that selects the queries independently of the data.
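To illustrate the query model the abstract describes, the sketch below runs a standard two-point zeroth-order method on the linear regression loss, where the learner may only query loss values, never gradients or data. This is a minimal generic sketch for intuition, not the paper's Hebbian rule; the step sizes, smoothing radius, and problem dimensions are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5                     # sample size and parameter dimension (illustrative)
X = rng.normal(size=(n, d))
theta_star = rng.normal(size=d)   # unknown regression vector to recover
y = X @ theta_star

def loss(theta):
    # The only oracle available to the learner: a query of the regression loss.
    return 0.5 * np.mean((X @ theta - y) ** 2)

theta = np.zeros(d)
mu, eta = 1e-4, 0.05              # smoothing radius and step size (assumed values)
for _ in range(5000):
    v = rng.normal(size=d)        # random query direction
    # Two-point estimate of the directional derivative, scaled back to a
    # gradient estimate; uses two loss queries per step.
    g = (loss(theta + mu * v) - loss(theta - mu * v)) / (2 * mu) * v
    theta -= eta * g              # descend along the estimated gradient

print(np.linalg.norm(theta - theta_star))  # small: the regression vector is recovered
```

Because the queries here depend on the current iterate, this is an adaptive method in the sense of the abstract; a non-adaptive method would have to fix all query points in advance.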
Original language: English
Publisher: ArXiv.org
Number of pages: 34
DOIs
Publication status: Published - 26 Sept 2023

Keywords

  • math.ST
  • cs.LG
  • cs.NE
  • stat.TH
  • Primary: 62L20, secondary: 62J05

