TY - GEN
T1 - Co-optimized training of models with synaptic delays for digital neuromorphic accelerators
AU - Patiño-Saucedo, Alberto
AU - Meijer, Roy
AU - Detterer, Paul
AU - Yousefzadeh, Amirreza
AU - Garrido-Regife, Laura
AU - Linares-Barranco, Bernabé
AU - Sifalakis, Manolis
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024/7/2
Y1 - 2024/7/2
AB - Configurable delays are a basic feature in many neuromorphic neural network hardware accelerators. However, they have rarely been used in model implementations, despite their promising impact on performance and efficiency in tasks that exhibit complex dynamics, as it has been unclear how to optimize them. In this work, we propose a framework to train and deploy, on digital neuromorphic hardware, high-performing spiking neural networks (SNNs) in which the synaptic delays are co-optimized alongside the weights. We consider synaptic (i.e. per-synapse) delays and evaluate them on two digital neuromorphic hardware platforms: Intel's Loihi and Imec's Seneca. Leveraging spike-based back-propagation-through-time, the training process accounts for the constraints of both platforms, such as synaptic weight precision and the total number of parameters per core, as a function of the network size. In addition, a delay pruning technique is used to reduce the memory footprint at a low cost in performance. The evaluated benchmark involves several models for solving the SHD (Spiking Heidelberg Digits) classification task, where minimal accuracy degradation during the transition from software to hardware is demonstrated. To our knowledge, this is the first work showcasing how to train and deploy hardware-aware models parameterized with synaptic delays on multicore neuromorphic hardware accelerators.
KW - Spiking Neural Networks
KW - Synaptic Delays
KW - Temporal Signal Analysis
KW - Spiking Heidelberg Digits
UR - http://www.scopus.com/inward/record.url?scp=85198560159&partnerID=8YFLogxK
DO - 10.1109/ISCAS58744.2024.10558209
M3 - Conference contribution
AN - SCOPUS:85198560159
T3 - Proceedings - IEEE International Symposium on Circuits and Systems
BT - ISCAS 2024 - IEEE International Symposium on Circuits and Systems
PB - IEEE
T2 - IEEE International Symposium on Circuits and Systems, ISCAS 2024
Y2 - 19 May 2024 through 22 May 2024
ER -
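
A minimal, hypothetical sketch of the co-optimization idea described in the abstract above, not the authors' implementation: it assumes each synapse carries a bank of D discrete, fixed delay taps, each with its own trainable weight, trained end to end with surrogate-gradient back-propagation-through-time; low-magnitude taps are then pruned to shrink the memory footprint. All names (DelaySynapseLayer, prune_delays) and hyperparameters here are illustrative assumptions.

    # Hypothetical sketch (not the paper's code): weight/delay co-optimization
    # for one SNN layer. Each synapse has D fixed delay taps (0..D-1 timesteps),
    # each with its own trainable weight; training effectively selects useful
    # delays, and pruning low-magnitude taps reduces the memory footprint.
    import torch
    import torch.nn as nn


    class SurrogateSpike(torch.autograd.Function):
        """Heaviside spike in the forward pass, boxcar surrogate gradient in BPTT."""

        @staticmethod
        def forward(ctx, v):
            ctx.save_for_backward(v)
            return (v > 0.0).float()

        @staticmethod
        def backward(ctx, grad_out):
            (v,) = ctx.saved_tensors
            return grad_out * (v.abs() < 0.5).float()  # gradient window near threshold


    class DelaySynapseLayer(nn.Module):  # illustrative name, not from the paper
        """Leaky integrate-and-fire layer with per-synapse delay taps.

        weight has shape (n_out, n_in, D): one trainable weight per (synapse, delay).
        """

        def __init__(self, n_in, n_out, max_delay=8, beta=0.9):
            super().__init__()
            self.weight = nn.Parameter(0.1 * torch.randn(n_out, n_in, max_delay))
            self.max_delay = max_delay
            self.beta = beta  # membrane leak factor

        def forward(self, spikes):  # spikes: (T, batch, n_in) binary tensor
            T, batch, _ = spikes.shape
            v = torch.zeros(batch, self.weight.shape[0], device=spikes.device)
            out = []
            for t in range(T):
                # Input spike history s[t - d] for each delay tap d (zeros before t=0).
                hist = torch.stack(
                    [spikes[t - d] if t - d >= 0 else torch.zeros_like(spikes[0])
                     for d in range(self.max_delay)],
                    dim=-1)  # (batch, n_in, D)
                i_t = torch.einsum('oid,bid->bo', self.weight, hist)
                v = self.beta * v + i_t
                s = SurrogateSpike.apply(v - 1.0)  # unit firing threshold
                v = v * (1.0 - s)  # reset membrane on spike
                out.append(s)
            return torch.stack(out)  # (T, batch, n_out)


    def prune_delays(layer, keep=2):
        """Zero out all but the `keep` largest-magnitude delay taps per synapse."""
        with torch.no_grad():
            w = layer.weight  # (n_out, n_in, D)
            thresh = w.abs().topk(keep, dim=-1).values[..., -1:]
            w.mul_((w.abs() >= thresh).float())

A training loop would unroll forward() over the input sequence and backpropagate a task loss (e.g. cross-entropy on output spike counts for SHD) through the surrogate gradients; calling prune_delays after convergence mirrors, under these assumptions, the delay-pruning step the abstract describes.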