brains-py: A framework to support research on energy-efficient unconventional hardware for machine learning

  • Unai Alegre-Ibarra (Creator)
  • Hans-Christian Ruiz Euler (Creator)
  • Humaid Mollah (Creator)
  • Bozhidar P. Petrov (Creator)
  • Srikumar S. Sastry (Creator)
  • Marcus Boon (Creator)
  • Michel P. de Jong (Creator)
  • Mohamadreza Zolfagharinejad (Creator)
  • Florentina M.J. Uitzetter (Creator)
  • Bram van de Ven (Creator)
  • Antonio Sousa de Almeida (Creator)
  • Sachin Kinge (Creator)
  • Wilfred G. van der Wiel (Creator)



Projections about the limitations of digital computers for deep learning models are driving a shift towards domain-specific hardware, where novel analogue components are sought after for their potential advantages in power consumption. This paper introduces brains-py, a generic framework to facilitate research on different sorts of disordered nano-material networks for natural and energy-efficient analogue computing. It has mainly been applied to dopant network processing units (DNPUs), a novel and promising class of CMOS-compatible, nano-scale tunable systems based on doped silicon, with potentially very low power consumption at the inference stage. The framework focuses on two material-learning approaches for training DNPUs on supervised learning tasks: evolution-in-matter and surrogate models. Evolution-in-matter provides quick exploration of newly manufactured single DNPUs, while the surrogate-model approach is used to design and simulate the interconnection of multiple DNPUs, enabling exploration of their scalability. All simulation results can be seamlessly validated on hardware, saving the time and costs associated with reproducing them. The framework is generic and can be reused for research on various materials with different design aspects, providing support for the most common tasks required for experiments with these novel materials.
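The evolution-in-matter approach searches directly for control voltages that make a single physical device compute a target task, using only its measured input-output behaviour. The sketch below illustrates that idea with a simple genetic algorithm; the physical DNPU is replaced by a hypothetical stand-in function (`toy_device`, the voltage ranges, and the XOR target levels are all illustrative assumptions, not part of brains-py's actual API):

```python
import random

def toy_device(inputs, controls):
    # Stand-in for a physical DNPU: a fixed nonlinear map from two
    # data inputs and five control voltages to one output value.
    # (Hypothetical; a real experiment would measure the material.)
    s = sum(c * (i + 1) * 0.3 for i, c in enumerate(controls))
    x, y = inputs
    return (x * controls[0] - y * controls[1]
            + x * y * controls[2] + 0.1 * s) ** 2 - s

def fitness(controls, table):
    # Negative squared error between device outputs and a target
    # truth table; higher is better.
    err = 0.0
    for (x, y), target in table:
        err += (toy_device((x, y), controls) - target) ** 2
    return -err

def evolve(table, pop_size=50, gens=200, seed=0):
    # Minimal genetic algorithm: rank, keep an elite, breed children
    # by averaging two elite parents plus Gaussian mutation.
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(5)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda c: fitness(c, table), reverse=True)
        elite = pop[: pop_size // 5]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            children.append([(ai + bi) / 2 + rng.gauss(0, 0.1)
                             for ai, bi in zip(a, b)])
        pop = elite + children
    return max(pop, key=lambda c: fitness(c, table))

# Target: an XOR gate with low/high output levels.
XOR = [((0, 0), 0.0), ((0, 1), 1.0), ((1, 0), 1.0), ((1, 1), 0.0)]
best = evolve(XOR)
```

In the real setting, each fitness evaluation would apply the candidate control voltages to the material and measure its output; the surrogate-model approach instead fits a differentiable model to such measurements, so that gradient-based training and the interconnection of multiple simulated DNPUs become possible.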
Date made available: 5 Oct 2023