Evaluating CNN Interpretability on Sketch Classification

Abraham Theodorus, Meike Nauta, Christin Seifert

    Research output: Contribution to conference › Paper


    Abstract

    While deep neural networks (DNNs) have been shown to outperform humans on many vision tasks, their opaque decision-making process inhibits widespread uptake, especially in high-risk scenarios.
    The BagNet architecture was designed to learn visual features that are easier to explain than the feature representations of other convolutional neural networks (CNNs). Previous experiments with BagNet focused on natural images, which provide rich texture and color information. In this paper, we investigate the performance and interpretability of BagNet on a data set of human sketches, i.e., a data set with limited color and no texture information. We also introduce a heatmap interpretability score (HI score) to quantify model interpretability and present a user study to examine BagNet interpretability from a user perspective.
    Our results show that, based on the HI score, BagNet is by far the most interpretable CNN architecture in our experimental setup.
    Original language: English
    Publication status: Published - 2019
    Event: 12th International Conference on Machine Vision, ICMV 2019 - Mercure Hotel Amsterdam City, Amsterdam, Netherlands
    Duration: 16 Nov 2019 - 18 Nov 2019
    Conference number: 12
    http://icmv.org

    Conference

    Conference: 12th International Conference on Machine Vision, ICMV 2019
    Abbreviated title: ICMV
    Country: Netherlands
    City: Amsterdam
    Period: 16/11/19 - 18/11/19
    Internet address: http://icmv.org


    Cite this

    Theodorus, A., Nauta, M., & Seifert, C. (2019). Evaluating CNN Interpretability on Sketch Classification. Paper presented at 12th International Conference on Machine Vision, ICMV 2019, Amsterdam, Netherlands.