DFM-X: Augmentation by Leveraging Prior Knowledge of Shortcut Learning

Research output: Working paper › Preprint › Academic


Abstract

Neural networks are prone to learning easy solutions from superficial statistics in the data, a phenomenon known as shortcut learning, which impairs the generalization and robustness of models. We propose a data augmentation strategy, named DFM-X, that leverages knowledge about frequency shortcuts, encoded in Dominant Frequencies Maps (DFMs) computed for image classification models. We randomly select X% of the training images of certain classes for augmentation, and process them by retaining only the frequencies included in the DFMs of other classes. This strategy compels the models to leverage a broader range of frequencies for classification, rather than relying on specific frequency sets. Thus, the models learn deeper, more task-related semantics compared to their counterparts trained with standard setups. Unlike other commonly used augmentation techniques, which focus on increasing the visual variation of the training data, our method exploits the original data efficiently, by distilling from the data prior knowledge about destructive learning behavior of models. Our experimental results demonstrate that DFM-X improves robustness against common corruptions and adversarial attacks. It can be seamlessly integrated with other augmentation techniques to further enhance the robustness of models.
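The core operation described in the abstract — keeping only the frequencies in another class's DFM for a randomly selected X% of images — can be sketched as a frequency-domain filter. This is a minimal illustration, not the authors' implementation: it assumes a DFM is a binary mask over the (shifted) 2D Fourier spectrum, and the helper names (`dfm_x_augment`, `apply_dfm_x`) are hypothetical.

```python
import numpy as np

def dfm_x_augment(image, dfm_other_class):
    """Retain only the frequencies marked in another class's DFM.

    image: 2D float array (H, W), a single grayscale image.
    dfm_other_class: 2D binary array (H, W); 1 where a frequency is
        kept (assumed to be laid out like an fftshift-ed spectrum).
    """
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    filtered = spectrum * dfm_other_class                # zero out non-DFM frequencies
    return np.fft.ifft2(np.fft.ifftshift(filtered)).real # back to image space

def apply_dfm_x(images, labels, dfms, x_percent, rng=None):
    """Augment x_percent% of each class's images with a different class's DFM.

    images: array (N, H, W); labels: array (N,);
    dfms: dict mapping class label -> binary DFM array (H, W).
    """
    rng = rng or np.random.default_rng(0)
    out = images.copy()
    classes = np.unique(labels)
    for c in classes:
        idx = np.where(labels == c)[0]
        n_aug = int(len(idx) * x_percent / 100)          # X% of this class
        for i in rng.choice(idx, size=n_aug, replace=False):
            other = rng.choice(classes[classes != c])    # a DFM from another class
            out[i] = dfm_x_augment(images[i], dfms[other])
    return out
```

In a training pipeline, `apply_dfm_x` would run on (a copy of) the training set before each epoch or once up front; the DFMs themselves would first be computed from a trained model, as described in the paper.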
Original language: English
Publisher: ArXiv.org
Number of pages: 10
Publication status: Published - 12 Aug 2023

  • DFM-X: Augmentation by Leveraging Prior Knowledge of Shortcut Learning

    Wang, S., Brune, C., Veldhuis, R. & Strisciuglio, N., 6 Oct 2023, 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). Piscataway, NJ: IEEE, pp. 129-138, article 10350684. (Proceedings IEEE/CVF International Conference on Computer Vision Workshops (ICCVW); vol. 2023).

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

