It’s Complicated: The Relationship between User Trust, Model Accuracy and Explanations in AI

Andrea Papenmeier, Dagmar Kern, Gwenn Englebienne, Christin Seifert

Research output: Contribution to journal › Article › Academic › peer-review

45 Citations (Scopus)
53 Downloads (Pure)

Abstract

Automated decision-making systems are becoming increasingly powerful as model complexity increases. While powerful in prediction accuracy, Deep Learning models are black boxes by nature, preventing users from making informed judgments about the correctness and fairness of such an automated system. Explanations have been proposed as a general remedy to the black box problem. However, it remains unclear whether the effects of explanations on user trust generalise over varying accuracy levels. In an online user study with 959 participants, we examined the practical consequences of adding explanations for user trust: We evaluated trust for three explanation types on three classifiers of varying accuracy. We find that the influence of our explanations on trust differs depending on the classifier’s accuracy. Thus, the interplay between trust and explanations is more complex than previously reported. Our findings also reveal discrepancies between self-reported and behavioural trust, showing that the choice of trust measure impacts the results.
Original language: English
Pages (from-to): 1-33
Journal: ACM Transactions on Computer-Human Interaction
Volume: 29
Issue number: 4
DOIs
Publication status: Published - 31 Mar 2022

Keywords

  • UT-Hybrid-D
