Abstract
Local learning rules in biological neural networks (BNNs) are commonly referred to as Hebbian learning. [26] links a biologically motivated Hebbian learning rule to a specific zeroth-order optimization method. In this work, we study a variation of this Hebbian learning rule to recover the regression vector in the linear regression model. Zeroth-order optimization methods are known to converge at a suboptimal rate in large parameter dimensions compared to first-order methods such as gradient descent, and are therefore generally thought to be inferior. By establishing upper and lower bounds, we show, however, that such methods achieve near-optimal rates if only queries of the linear regression loss are available. Moreover, we prove that this Hebbian learning rule can achieve considerably faster rates than any non-adaptive method that selects its queries independently of the data.
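The abstract refers to a specific Hebbian rule from [26] that is not spelled out here; as a rough illustration of the query model only, the sketch below recovers a regression vector with a generic two-point zeroth-order scheme that accesses the problem solely through evaluations of the linear regression loss. The problem sizes, step size `eta`, and smoothing radius `mu` are illustrative assumptions, not values or the method from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear regression instance (illustrative sizes, not from the paper).
n, d = 200, 10
X = rng.standard_normal((n, d))
theta_star = rng.standard_normal(d)
y = X @ theta_star + 0.1 * rng.standard_normal(n)

def loss(theta):
    # The only oracle available to the optimizer: the value of the regression loss.
    r = y - X @ theta
    return r @ r / n

# Generic two-point zeroth-order scheme (an assumption; the paper's Hebbian
# rule is a particular variant not reproduced here): estimate a directional
# derivative from two loss queries and step along the random direction.
theta = np.zeros(d)
eta, mu = 0.05, 1e-4  # illustrative step size and smoothing radius
for _ in range(5000):
    u = rng.standard_normal(d)
    g = (loss(theta + mu * u) - loss(theta - mu * u)) / (2 * mu) * u
    theta -= eta * g

print("relative error:", np.linalg.norm(theta - theta_star) / np.linalg.norm(theta_star))
```

Each step extracts only one scalar of information about the d-dimensional gradient (the directional derivative along u), which is the source of the dimension-dependent slowdown of zeroth-order methods that the abstract alludes to.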
| Original language | English |
| --- | --- |
| Publisher | ArXiv.org |
| Number of pages | 34 |
| DOIs | |
| Publication status | Published - 26 Sept 2023 |
Keywords
- math.ST
- cs.LG
- cs.NE
- stat.TH
- Primary: 62L20, secondary: 62J05