Evaluating CNN interpretability on sketch classification

Abraham Theodorus, Meike Nauta, Christin Seifert

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review


Abstract

While deep neural networks (DNNs) have been shown to outperform humans on many vision tasks, their opaque decision-making process inhibits widespread uptake, especially in high-risk scenarios. The BagNet architecture was designed to learn visual features that are easier to explain than the feature representations of other convolutional neural networks (CNNs). Previous experiments with BagNet focused on natural images, which provide rich texture and color information. In this paper, we investigate the performance and interpretability of BagNet on a data set of human sketches, i.e., a data set with limited color and no texture information. We also introduce a heatmap interpretability score (HI score) to quantify model interpretability and present a user study examining BagNet interpretability from the user perspective. Our results show that BagNet is by far the most interpretable CNN architecture in our experimental setup based on the HI score.
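
For readers unfamiliar with BagNet, the minimal PyTorch sketch below illustrates the bag-of-local-features idea behind its heatmaps: class logits are computed from small image patches only and averaged spatially, so the per-patch logits double as a class-evidence heatmap. The toy model, patch size, and input shape here are illustrative assumptions; this is not the authors' code and it does not implement their HI score.

```python
# Minimal, illustrative bag-of-local-features model (not the authors' code).
# Each ~9x9 patch is classified independently; the per-patch logits form a
# class-evidence heatmap and are averaged spatially for the image prediction.
import torch
import torch.nn as nn

class TinyBagNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Two 5x5 convolutions give a 9x9 receptive field, so every output
        # location depends only on one small local patch of the input.
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(64, num_classes, kernel_size=1)  # per-patch logits

    def forward(self, x):
        # x: (batch, 1, H, W) grayscale sketch images
        patch_logits = self.classifier(self.features(x))  # (batch, C, H', W')
        image_logits = patch_logits.mean(dim=(2, 3))       # spatial average pooling
        return image_logits, patch_logits                  # prediction + evidence heatmap

model = TinyBagNet()
x = torch.randn(1, 1, 64, 64)                # dummy 64x64 sketch image
logits, heatmap = model(x)
pred = logits.argmax(dim=1).item()
evidence = heatmap[0, pred]                  # heatmap for the predicted class
print(pred, evidence.shape)                  # e.g. torch.Size([56, 56])
```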

Original language: English
Title of host publication: 12th International Conference on Machine Vision, ICMV 2019
Editors: Wolfgang Osten, Dmitry Nikolaev, Jianhong Zhou
Publisher: SPIE Press
ISBN (Electronic): 9781510636439
DOIs
Publication status: Published - 31 Jan 2020
Event: 12th International Conference on Machine Vision, ICMV 2019 - Mercure Hotel Amsterdam City, Amsterdam, Netherlands
Duration: 16 Nov 2019 - 18 Nov 2019
Conference number: 12
http://icmv.org

Publication series

Name: Proceedings of SPIE - The International Society for Optical Engineering
Volume: 11433
ISSN (Print): 0277-786X
ISSN (Electronic): 1996-756X

Conference

Conference: 12th International Conference on Machine Vision, ICMV 2019
Abbreviated title: ICMV
Country/Territory: Netherlands
City: Amsterdam
Period: 16/11/19 - 18/11/19
Internet address: http://icmv.org

Keywords

  • Explainable AI
  • Interpretable CNN
  • Quantifying model interpretability
  • Sketch classification
