Evaluating CNN Interpretability on Sketch Classification

Abraham Theodorus, Meike Nauta, Christin Seifert

Research output: Contribution to conference › Paper › Academic › peer-review


Abstract

While deep neural networks (DNNs) have been shown to outperform humans on many vision tasks, their opaque decision-making process inhibits widespread uptake, especially in high-risk scenarios. The BagNet architecture was designed to learn visual features that are easier to explain than the feature representations of other convolutional neural networks (CNNs). Previous experiments with BagNet focused on natural images, which provide rich texture and color information. In this paper, we investigate the performance and interpretability of BagNet on a data set of human sketches, i.e., a data set with limited color and no texture information. We also introduce a heatmap interpretability score (HI score) to quantify model interpretability, and present a user study examining BagNet interpretability from a user perspective. Our results show that, based on the HI score, BagNet is by far the most interpretable CNN architecture in our experimental setup.
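As background on the architecture the abstract refers to: BagNet obtains its interpretability by computing class logits from small local image patches and averaging them into the image-level prediction, so each spatial cell's contribution can be read off directly as a heatmap. The following is a minimal NumPy sketch of that aggregation step only (the `bagnet_style_predict` helper and the toy random logits are illustrative, not from the paper):

```python
import numpy as np

def bagnet_style_predict(patch_logits):
    """Aggregate local patch evidence in the BagNet style.

    patch_logits: array of shape (H, W, C) holding class logits for each
    small image patch. Averaging these local logits yields the image-level
    prediction, and the per-patch logits of the winning class form a
    directly inspectable evidence heatmap.
    """
    image_logits = patch_logits.mean(axis=(0, 1))   # (C,) global class scores
    predicted = int(np.argmax(image_logits))        # winning class index
    heatmap = patch_logits[:, :, predicted]         # (H, W) evidence map
    return predicted, image_logits, heatmap

# Toy example: a 4x4 grid of patches, 3 classes, with class 2 dominating.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 4, 3))
logits[:, :, 2] += 2.0
pred, scores, hm = bagnet_style_predict(logits)
```

In a real BagNet the patch logits come from a CNN with a small receptive field; the point here is only that the final prediction is a plain average of local evidence, which is what makes the heatmap faithful to the model's decision.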
Original language: English
Publication status: Published - 2019
Event: 12th International Conference on Machine Vision, ICMV 2019 - Mercure Hotel Amsterdam City, Amsterdam, Netherlands
Duration: 16 Nov 2019 - 18 Nov 2019
Conference number: 12
http://icmv.org

Conference

Conference: 12th International Conference on Machine Vision, ICMV 2019
Abbreviated title: ICMV
Country: Netherlands
City: Amsterdam
Period: 16/11/19 - 18/11/19
Internet address: http://icmv.org


Cite this

Theodorus, A., Nauta, M., & Seifert, C. (2019). Evaluating CNN Interpretability on Sketch Classification. Paper presented at 12th International Conference on Machine Vision, ICMV 2019, Amsterdam, Netherlands.