Neural Prototype Trees for Interpretable Fine-Grained Image Recognition

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

Prototype-based methods use interpretable representations to address the black-box nature of deep learning models, in contrast to post-hoc explanation methods that only approximate such models. We propose the Neural Prototype Tree (ProtoTree), an intrinsically interpretable deep learning method for fine-grained image recognition. ProtoTree combines prototype learning with decision trees, and thus results in a globally interpretable model by design. Additionally, ProtoTree can locally explain a single prediction by outlining a decision path through the tree. Each node in our binary tree contains a trainable prototypical part. The presence or absence of this learned prototype in an image determines the routing through a node. Decision making is therefore similar to human reasoning: Does the bird have a red throat? And an elongated beak? Then it’s a hummingbird! We tune the accuracy-interpretability trade-off using ensemble methods, pruning and binarizing. We apply pruning without sacrificing accuracy, resulting in a small tree with only 8 learned prototypes along a path to classify a bird from 200 species. An ensemble of 5 ProtoTrees achieves competitive accuracy on the CUB-200-2011 and Stanford Cars data sets. Code is available at github.com/M-Nauta/ProtoTree.
Original language: English
Title of host publication: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Pages: 14933-14943
Number of pages: 11
Publication status: Published - 1 Jun 2021

