TY - UNPB
T1 - Gradient descent in materia through homodyne gradient extraction
AU - Boon, Marcus N.
AU - Cassola, Lorenzo
AU - Euler, Hans-Christian Ruiz
AU - Chen, Tao
AU - van de Ven, Bram
AU - Ibarra, Unai Alegre
AU - Bobbert, Peter A.
AU - van der Wiel, Wilfred G.
PY - 2021/05/15
Y1 - 2021/05/15
N2 - Deep learning, a multi-layered neural network approach inspired by the brain, has revolutionized machine learning. One of its key enablers has been backpropagation, an algorithm that computes the gradient of a loss function with respect to the weights and biases of the neural network, used in combination with gradient descent. However, the implementation of deep learning in digital computers is intrinsically energy-hungry, with energy consumption becoming prohibitively high for many applications. This has stimulated the development of specialized hardware, ranging from neuromorphic CMOS integrated circuits and integrated photonic tensor cores to unconventional, material-based computing systems. Learning in these material systems, realized, e.g., by artificial evolution, equilibrium propagation or surrogate modelling, is complicated and time-consuming. Here, we demonstrate a simple yet efficient and accurate gradient extraction method, based on the principle of homodyne detection, for performing gradient descent on a loss function directly in a physical system, without the need for an analytical description. By perturbing the parameters that need to be optimized using sinusoidal waveforms with distinct frequencies, we effectively obtain the gradient information in a highly robust and scalable manner. We illustrate the method in dopant network processing units, but argue that it is applicable to a wide range of physical systems. Homodyne gradient extraction can in principle be fully implemented in materia, facilitating the development of autonomously learning material systems.
AB - Deep learning, a multi-layered neural network approach inspired by the brain, has revolutionized machine learning. One of its key enablers has been backpropagation, an algorithm that computes the gradient of a loss function with respect to the weights and biases of the neural network, used in combination with gradient descent. However, the implementation of deep learning in digital computers is intrinsically energy-hungry, with energy consumption becoming prohibitively high for many applications. This has stimulated the development of specialized hardware, ranging from neuromorphic CMOS integrated circuits and integrated photonic tensor cores to unconventional, material-based computing systems. Learning in these material systems, realized, e.g., by artificial evolution, equilibrium propagation or surrogate modelling, is complicated and time-consuming. Here, we demonstrate a simple yet efficient and accurate gradient extraction method, based on the principle of homodyne detection, for performing gradient descent on a loss function directly in a physical system, without the need for an analytical description. By perturbing the parameters that need to be optimized using sinusoidal waveforms with distinct frequencies, we effectively obtain the gradient information in a highly robust and scalable manner. We illustrate the method in dopant network processing units, but argue that it is applicable to a wide range of physical systems. Homodyne gradient extraction can in principle be fully implemented in materia, facilitating the development of autonomously learning material systems.
KW - cs.NE
KW - cs.ET
KW - cs.LG
U2 - 10.48550/arXiv.2105.11233
DO - 10.48550/arXiv.2105.11233
M3 - Preprint
BT - Gradient descent in materia through homodyne gradient extraction
PB - arXiv.org
ER -