A Survey on the Robustness of Computer Vision Models against Common Corruptions

Research output: Working paper › Preprint › Academic


Abstract

The performance of computer vision models is susceptible to unexpected changes in input images, known as common corruptions (e.g. noise, blur, illumination changes), which can hinder their reliability when deployed in real scenarios. These corruptions are not always considered when testing model generalization and robustness. In this survey, we present a comprehensive overview of methods that improve the robustness of computer vision models against common corruptions. We categorize methods into four groups based on the model part and training method they address: data augmentation, representation learning, knowledge distillation, and network components. We also cover indirect methods for generalization and for mitigating shortcut learning, which are potentially useful for corruption robustness. We release a unified benchmark framework to compare robustness performance across several datasets, and we address the inconsistencies of evaluation in the literature. We provide an experimental overview of the baseline corruption robustness of popular vision backbones, and show that corruption robustness does not necessarily scale with model size: very large models (above 100M parameters) gain negligible robustness relative to their increased computational requirements. To achieve generalizable and robust computer vision models, we foresee the need to develop new learning strategies that efficiently exploit limited data and mitigate unwanted or unreliable learning behaviors.
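To make the notion of "common corruptions" concrete, the sketch below applies Gaussian noise to an image at increasing severity levels, in the spirit of corruption benchmarks such as ImageNet-C. The severity-to-sigma mapping and helper name are illustrative assumptions, not the exact parameters used by any particular benchmark.

```python
import numpy as np

def gaussian_noise(image, severity=1):
    """Corrupt a float image in [0, 1] with additive Gaussian noise.

    The sigma values per severity level (1-5) are illustrative only,
    not the exact constants of ImageNet-C.
    """
    sigma = [0.04, 0.06, 0.08, 0.09, 0.10][severity - 1]
    noisy = image + np.random.normal(0.0, sigma, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)

# Corrupt a dummy 32x32 RGB image at all five severity levels; a
# robustness evaluation would feed each corrupted copy to the model
# and average accuracy over corruptions and severities.
image = np.full((32, 32, 3), 0.5)
for s in range(1, 6):
    corrupted = gaussian_noise(image, severity=s)
    print(s, corrupted.shape, float(corrupted.min()), float(corrupted.max()))
```

A benchmark framework like the one the survey releases would typically loop such corruption functions over a whole test set rather than a single image.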
Original language: English
Publisher: ArXiv.org
Publication status: Published - 10 May 2023

Keywords

  • cs.CV

