Feature Attribution Explanations for Spiking Neural Networks

Elisa Nguyen, Meike Nauta, Gwenn Englebienne, Christin Seifert

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

Third-generation artificial neural networks, Spiking Neural Networks (SNNs), can be efficiently implemented on hardware. Their implementation on neuromorphic chips opens a broad range of applications, such as machine learning-based autonomous control and intelligent biomedical devices. In critical applications, however, insight into the reasoning of SNNs is important, thus SNNs need to be equipped with the ability to explain how decisions are reached. We present Temporal Spike Attribution (TSA), a local explanation method for SNNs. To compute the explanation, we aggregate all information available in model-internal variables: spike times and model weights. We evaluate TSA on artificial and real-world time series data and measure explanation quality w.r.t. multiple quantitative criteria. We find that TSA correctly identifies a small subset of input features relevant to the decision (i.e., is output-complete and compact) and generates similar explanations for similar inputs (i.e., is continuous). Further, our experiments show that incorporating the notion of absent spikes improves explanation quality. Our work can serve as a starting point for explainable SNNs, with future implementations on hardware yielding not only predictions but also explanations in a broad range of application scenarios. Source code is available at https://github.com/ElisaNguyen/tsa-explanations.
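The abstract describes computing attributions from spike times and model weights, including a notion of absent spikes. The following is a hypothetical, highly simplified sketch of that idea for a single-layer spiking model; it is not the authors' TSA algorithm (see the linked repository for the actual implementation), and the function name, array shapes, and the `absent_value` parameter are illustrative assumptions.

```python
import numpy as np

def tsa_sketch(spikes, weights, absent_value=-1.0):
    """Toy per-feature attribution for a single-layer spiking model.

    spikes : (features, timesteps) binary array of input spike trains
    weights: (features,) connection weights to one output neuron
    absent_value: contribution assigned to timesteps with no spike,
        loosely mirroring the "absent spikes" notion from the abstract
        (an illustrative assumption, not the paper's formulation).
    Returns one attribution score per input feature.
    """
    spikes = np.asarray(spikes, dtype=float)
    # A present spike contributes +1; an absent spike contributes
    # absent_value (set absent_value=0.0 to ignore absent spikes).
    signed = np.where(spikes > 0, 1.0, absent_value)
    # Weight each feature's signed spike activity and sum over time.
    return signed.sum(axis=1) * np.asarray(weights, dtype=float)
```

With `absent_value=0.0` only emitted spikes matter, while the default also penalizes silent timesteps, which is the kind of contrast the paper's absent-spike experiments probe.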

Original language: English
Title of host publication: Proceedings - 2023 IEEE 5th International Conference on Cognitive Machine Intelligence, CogMI 2023
Publisher: IEEE
Pages: 59-68
Number of pages: 10
ISBN (Electronic): 979-8-3503-2383-2
DOIs
Publication status: Published - 19 Feb 2023
Event: 5th IEEE International Conference on Cognitive Machine Intelligence, CogMI 2023 - Atlanta, United States
Duration: 1 Nov 2023 - 3 Nov 2023
Conference number: 5

Conference

Conference: 5th IEEE International Conference on Cognitive Machine Intelligence, CogMI 2023
Abbreviated title: CogMI
Country/Territory: United States
City: Atlanta
Period: 1/11/23 - 3/11/23

Keywords

  • feature attribution
  • spiking neural network
  • Explainability
