TY - JOUR
T1 - Dopant Network Processing Units
T2 - Towards Efficient Neural-network Emulators with High-capacity Nanoelectronic Nodes
AU - Ruiz Euler, Hans-Christian
AU - Alegre Ibarra, Unai
AU - van de Ven, Bram
AU - Broersma, Hajo
AU - Bobbert, Peter A.
AU - van der Wiel, Wilfred G.
PY - 2021/9/9
Y1 - 2021/9/9
N2 - The rapidly growing computational demands of deep neural networks require novel hardware designs. Recently, tuneable nanoelectronic devices were developed based on hopping electrons through a network of dopant atoms in silicon. These ‘dopant network processing units’ (DNPUs) are highly energy-efficient and have potentially very high throughput. By adapting the control voltages applied to its electrodes, a single DNPU can solve a variety of linearly non-separable classification problems. However, using a single device has limitations due to the implicit single-node architecture. This paper presents a promising novel approach to neural information processing by introducing DNPUs as high-capacity neurons and moving from a single to a multi-neuron framework. By implementing and testing a small multi-DNPU classifier in hardware, we show that feed-forward DNPU networks improve the performance of a single DNPU from 77% to 94% test accuracy on a binary classification task with concentric classes on a plane. Furthermore, motivated by the integration of DNPUs with memristor crossbar arrays, we study the potential of using DNPUs in combination with linear layers. We show by simulation that an MNIST classifier with only 10 DNPU nodes achieves over 96% test accuracy. Our results pave the road towards hardware neural network emulators that offer atomic-scale information processing with low latency and energy consumption.
AB - The rapidly growing computational demands of deep neural networks require novel hardware designs. Recently, tuneable nanoelectronic devices were developed based on hopping electrons through a network of dopant atoms in silicon. These ‘dopant network processing units’ (DNPUs) are highly energy-efficient and have potentially very high throughput. By adapting the control voltages applied to its electrodes, a single DNPU can solve a variety of linearly non-separable classification problems. However, using a single device has limitations due to the implicit single-node architecture. This paper presents a promising novel approach to neural information processing by introducing DNPUs as high-capacity neurons and moving from a single to a multi-neuron framework. By implementing and testing a small multi-DNPU classifier in hardware, we show that feed-forward DNPU networks improve the performance of a single DNPU from 77% to 94% test accuracy on a binary classification task with concentric classes on a plane. Furthermore, motivated by the integration of DNPUs with memristor crossbar arrays, we study the potential of using DNPUs in combination with linear layers. We show by simulation that an MNIST classifier with only 10 DNPU nodes achieves over 96% test accuracy. Our results pave the road towards hardware neural network emulators that offer atomic-scale information processing with low latency and energy consumption.
U2 - 10.1088/2634-4386/ac1a7f
DO - 10.1088/2634-4386/ac1a7f
M3 - Article
SN - 2634-4386
VL - 1
JO - Neuromorphic Computing and Engineering
JF - Neuromorphic Computing and Engineering
IS - 2
M1 - 024002
ER -