Abstract
This paper discusses the different roles that explicit knowledge, in particular ontologies, can play in Explainable AI and in the development of human-centric explainable systems and intelligible explanations. We consider three main perspectives in which ontologies can contribute significantly, namely reference modelling, common-sense reasoning, and knowledge refinement and complexity management. We overview some of the existing approaches in the literature, and we position them according to these three proposed perspectives. The paper concludes by discussing what challenges still need to be addressed to enable ontology-based approaches to explanation and to evaluate their human-understandability and effectiveness.
| Original language | English |
|---|---|
| Publisher | ArXiv.org |
| Number of pages | 14 |
| Publication status | Published - 8 Nov 2023 |
Keywords
- cs.AI
- I.2.6
Research output
On the Multiple Roles of Ontologies in Explainable AI
Confalonieri, R. & Guizzardi, G., 15 Oct 2023 (E-pub ahead of print/First online). In: Neurosymbolic Artificial Intelligence. 14 p. Research output: Contribution to journal › Article › Professional. Open Access.