Abstract
This paper approaches the interaction of a health professional with an AI system for diagnostic purposes as a hybrid decision making process and conceptualizes epistemo-ethical constraints on this process. We argue that understanding the underlying machine epistemology is important for raising awareness of, and setting realistic expectations for, AI as a decision support system, both among healthcare professionals and among the intended beneficiaries (patients). Understanding the epistemic abilities and limitations of such systems is essential if we are to integrate AI into diagnostic decision making in a way that respects its boundaries of applicability. This will help mitigate potential harm from misjudgments and, as a result, raise trust in the AI system (understood here as a belief in its reliability). We propose a minimal requirement for AI meta-explanation: it should distinguish machine epistemic processes from similar processes in human epistemology, so as to avoid confusion and errors in judgment and application. An informed approach to integrating AI systems into decision making for diagnostic purposes is crucial given its high impact on the health and well-being of patients.
| Original language | English |
|---|---|
| Article number | 22 |
| Number of pages | 15 |
| Journal | Ethics and Information Technology |
| Volume | 24 |
| Early online date | 19 Apr 2022 |
| DOIs | |
| Publication status | Published - Jun 2022 |
Keywords
- hybrid epistemology
- ethics and epistemology of AI
- fuzzy concepts
- medical AI
- AI in decision making