Playing to distraction: towards a robust training of CNN classifiers through visual explanation techniques

David Morales*, Estefanía Talavera, Beatriz Remeseiro

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review



The field of deep learning is evolving in different directions, and more efficient training strategies are still needed. In this work, we present a novel and robust training scheme that integrates visual explanation techniques into the learning process. Unlike attention mechanisms, which focus on the relevant parts of images, we aim to improve the robustness of the model by making it pay attention to other regions as well. Broadly speaking, the idea is to distract the classifier during learning by forcing it to focus not only on relevant regions but also on those that, a priori, are less informative for discriminating the class. We tested the proposed approach by embedding it into the learning process of a convolutional neural network for the analysis and classification of two well-known datasets, namely Stanford Cars and FGVC-Aircraft. Furthermore, we evaluated our model in a real-world scenario, the classification of egocentric images, which allows relevant information about people's lifestyles to be obtained. In particular, we work on the challenging EgoFoodPlaces dataset, achieving state-of-the-art results with a lower level of complexity. The results obtained indicate the suitability of our proposed training scheme for image classification, improving the robustness of the final model.
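The core idea described above — occluding the most salient regions so the classifier must also learn from a priori less informative ones — can be sketched as a simple masking step. This is a hypothetical illustration, not the authors' implementation; it assumes a per-pixel saliency map (e.g. from a visual explanation method such as Grad-CAM) is already available:

```python
import numpy as np

def distraction_mask(image, saliency, top_fraction=0.25):
    """Occlude the `top_fraction` most salient pixels of `image`.

    image:    H x W x C array.
    saliency: H x W array of per-pixel relevance scores
              (assumed to come from a visual explanation method).
    Returns a copy of the image with the most informative
    regions zeroed out, forcing the classifier to rely on
    the remaining, less informative regions.
    """
    # Pixels at or above this score belong to the most salient fraction.
    threshold = np.quantile(saliency, 1.0 - top_fraction)
    keep = (saliency < threshold).astype(image.dtype)  # 0 where most salient
    return image * keep[..., np.newaxis]               # broadcast over channels
```

During training, such masked images could be fed alongside the originals so the network is "distracted" away from the regions it already finds discriminative; the exact way the masked views enter the loss is a design choice of the published scheme, not shown here.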
Original language: English
Pages (from-to): 16937–16949
Journal: Neural Computing and Applications
Publication status: Published - 2021
Externally published: Yes


