Are we justified attributing a mistake in diagnosis to an AI diagnostic system?

Dina Babushkina*

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-reviewed



Responsible professional use of AI implies the readiness to respond to and address—in an ethically appropriate manner—harm that may be associated with such use. This presupposes the ownership of mistakes. In this paper, I ask whether a mistake in AI-enhanced decision making—such as AI-aided medical diagnosis—can be attributed to the AI system itself, and answer this question negatively. I explore two options. If AI systems are merely tools, then we are never justified in attributing mistakes to them, because their failing does not meet rational constraints on being mistaken. If, for the sake of the argument, we assume that AI systems are not (mere) tools, then we face certain challenges. The first is the burden to explain what this more-than-a-tool role of an AI system is, and to establish justificatory reasons for the AI system to be considered as such. The second is to prove that medical diagnosis can be reduced to calculations by an AI system without any significant loss to the purpose and quality of the diagnosis as a procedure. I conclude that the problem of the ownership of mistakes in hybrid decision making necessitates new forms of epistemic responsibilities.
Original language: English
Pages (from-to): 567-584
Number of pages: 18
Journal: AI and Ethics
Early online date: 9 Aug 2022
Publication status: Published - May 2023


  • Automated diagnosis
  • Medical AI
  • Ethics and epistemology of AI
  • Hybrid decision making
  • Responsibility for diagnosis
  • Ownership of mistakes
