Abstract
The field of explainable artificial intelligence (XAI) aims to increase the transparency of AI models by providing explanations for their reasoning processes. Valuable efforts have already increased transparency, but blind spots remain in the literature, particularly around the use of XAI techniques in practice. To make the development of AI models truly explainable, transparency is required at each stage of the Machine Learning Operations (MLOps) workflow: data preparation, model development, and model deployment. This research aims to mitigate issues at each stage, using case studies from the industry and health domains. The final objective is to provide an application-oriented methodological framework for the development of more transparent AI approaches.
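As a purely illustrative aside (not taken from the paper), the sketch below shows how a post-hoc, model-agnostic explanation step could be attached to the model-development stage of an MLOps workflow. The dataset, model, and choice of permutation feature importance are assumptions for illustration, not the techniques evaluated in this work.

```python
# Minimal sketch (illustrative assumption, not the authors' method):
# attach a post-hoc, model-agnostic explanation to the model-development stage.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Example dataset and model (assumptions chosen only for a runnable example)
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Post-hoc explanation: permutation feature importance on held-out data,
# a transparency artifact that could be logged alongside the trained model.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top_features = sorted(zip(X.columns, result.importances_mean),
                      key=lambda t: t[1], reverse=True)[:5]
for name, score in top_features:
    print(f"{name}: {score:.3f}")
```

Such an artifact could, for instance, be versioned with the model so that explanations remain traceable through deployment; the specific tooling is left open here.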
Original language | English |
---|---|
Title of host publication | xAI-2024 Late-breaking Work, Demos and Doctoral Consortium Joint Proceedings |
Pages | 385-392 |
Number of pages | 8 |
Volume | 3793 |
Publication status | Published - Jul 2024 |
Event | 2nd World Conference on Explainable Artificial Intelligence, xAI 2024, Valletta, Malta, 17 Jul 2024 → 19 Jul 2024, https://xaiworldconference.com/2024/ |
Publication series
Name | CEUR Workshop Proceedings |
---|---|
ISSN (Print) | 1613-0073 |
Conference
Conference | 2nd World Conference on Explainable Artificial Intelligence, xAI 2024 |
---|---|
Country/Territory | Malta |
City | Valletta |
Period | 17/07/24 → 19/07/24 |
Internet address | https://xaiworldconference.com/2024/ |