Backdoor Mitigation in Deep Neural Networks via Strategic Retraining

Akshay Dhonthi Ramesh Babu*, Ernst Moritz Hahn, Vahid Hashemi

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review


Abstract

Deep Neural Networks (DNNs) are becoming increasingly important in assisted and automated driving. Using such entities obtained through machine learning is inevitable: tasks such as recognizing traffic signs cannot reasonably be solved with traditional software development methods. DNNs, however, are mostly black boxes and therefore hard to understand and debug. One particular problem is that they are prone to hidden backdoors: the DNN misclassifies its input because it considers properties that should not be decisive for the output. Backdoors may be introduced either by malicious attackers or by inappropriate training. In either case, detecting and removing them is important in the automotive domain, as they might lead to safety violations with potentially severe consequences. In this paper, we introduce a novel method to remove backdoors. Our method works for both intentional and unintentional backdoors, and it requires no prior knowledge about the shape or distribution of the backdoors. Experimental evidence shows that our method performs well on several medium-sized examples.
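For readers unfamiliar with the setting described in the abstract, the following is a minimal PyTorch sketch of the standard backdoor threat model: a small trigger patch stamped onto inputs causes targeted misclassification, and retraining on trusted clean data serves as the simplest mitigation baseline. The helper names (apply_trigger, finetune_on_clean_data, attack_success_rate) are illustrative assumptions, and the whole-network fine-tuning loop is a generic stand-in, not the strategic retraining procedure the paper itself proposes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def apply_trigger(images: torch.Tensor, patch_value: float = 1.0) -> torch.Tensor:
    """Stamp a 3x3 patch into the bottom-right corner of each image.

    A classic trigger pattern from the backdoor literature; the paper
    itself assumes no particular trigger shape or distribution.
    """
    poisoned = images.clone()
    poisoned[..., -3:, -3:] = patch_value
    return poisoned


def finetune_on_clean_data(model: nn.Module, clean_loader,
                           epochs: int = 1, lr: float = 1e-4) -> nn.Module:
    """Generic mitigation baseline: fine-tune the whole network on trusted
    clean data so that trigger-sensitive weights are overwritten.

    The paper's strategic retraining instead targets the parts of the
    network implicated in the backdoor; this loop is only a stand-in.
    """
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for x, y in clean_loader:
            optimizer.zero_grad()
            loss = F.cross_entropy(model(x), y)
            loss.backward()
            optimizer.step()
    return model


def attack_success_rate(model: nn.Module, loader, target_class: int) -> float:
    """Fraction of triggered inputs classified as the attacker's target class."""
    model.eval()
    hits, total = 0, 0
    with torch.no_grad():
        for x, _ in loader:
            preds = model(apply_trigger(x)).argmax(dim=1)
            hits += (preds == target_class).sum().item()
            total += x.size(0)
    return hits / max(total, 1)
```

In practice one would measure the attack success rate before and after mitigation, alongside clean accuracy, to verify that the backdoor is removed without degrading normal behaviour.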

Original language: English
Title of host publication: Formal Methods - 25th International Symposium, FM 2023, Proceedings
Editors: Marsha Chechik, Joost-Pieter Katoen, Martin Leucker
Publisher: Springer
Pages: 635-647
Number of pages: 13
ISBN (Print): 9783031274800
DOIs
Publication status: Published - 3 Mar 2023
Event: 25th International Symposium on Formal Methods, FM 2023 - Lübeck, Germany
Duration: 6 Mar 2023 – 10 Mar 2023
Conference number: 25

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 14000 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 25th International Symposium on Formal Methods, FM 2023
Abbreviated title: FM
Country/Territory: Germany
City: Lübeck
Period: 6/03/23 – 10/03/23

Keywords

  • Backdoor mitigation
  • Neural networks
  • Security testing
  • Adversarial attacks
