Abstract
While deep neural networks (DNNs) have been shown to outperform humans on many vision tasks, their opaque decision-making process inhibits widespread adoption, especially in high-risk scenarios. The BagNet architecture was designed to learn visual features that are easier to explain than the feature representations of other convolutional neural networks (CNNs). Previous experiments with BagNet focused on natural images, which provide rich texture and color information. In this paper, we investigate the performance and interpretability of BagNet on a data set of human sketches, i.e., a data set with limited color and no texture information. We also introduce a heatmap interpretability score (HI score) to quantify model interpretability, and present a user study examining BagNet's interpretability from the user's perspective. Our results show that, based on the HI score, BagNet is by far the most interpretable CNN architecture in our experimental setup.
Original language | English |
---|---|
Title of host publication | 12th International Conference on Machine Vision, ICMV 2019 |
Editors | Wolfgang Osten, Dmitry Nikolaev, Jianhong Zhou |
Publisher | SPIE Press |
ISBN (Electronic) | 9781510636439 |
DOIs | |
Publication status | Published - 31 Jan 2020 |
Event | 12th International Conference on Machine Vision, ICMV 2019 — Mercure Hotel Amsterdam City, Amsterdam, Netherlands; Duration: 16 Nov 2019 → 18 Nov 2019; Conference number: 12; http://icmv.org |
Publication series
Name | Proceedings of SPIE - The International Society for Optical Engineering |
---|---|
Volume | 11433 |
ISSN (Print) | 0277-786X |
ISSN (Electronic) | 1996-756X |
Conference
Conference | 12th International Conference on Machine Vision, ICMV 2019 |
---|---|
Abbreviated title | ICMV |
Country/Territory | Netherlands |
City | Amsterdam |
Period | 16/11/19 → 18/11/19 |
Internet address |
Keywords
- Explainable AI
- Interpretable CNN
- Quantifying model interpretability
- Sketch classification