Trust Violations due to Error or Choice: The Differential Effects on Trust Repair in Human–Human and Human–Robot Interaction

  • Esther Kox*
  • Milou Hennekens
  • Jason Metcalfe
  • José Kerstholt

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › Peer-review


Abstract

Many decisions in life involve tradeoffs: to gain something, one often has to lose something in return. As robots become more autonomous, their decisions will extend beyond mere assessments (e.g., detecting a threat) to making choices (e.g., taking the faster or the safer route). The aim of the current research was to study perceived trustworthiness in scenarios involving adverse consequences due to (1) an assessment error versus (2) a choice. Perceived trustworthiness (ability, benevolence, integrity) was measured repeatedly during a computer task simulating a military mission. Participants teamed with either a virtual human or a robotic partner who led the way and warned of potential danger. After encountering a hazard, the partner explained that it (1) had failed to detect the threat (error) or (2) had prioritized the mission and chosen the fastest route despite the risk (choice). Results showed that: (a) the error explanation repaired all trustworthiness dimensions, (b) the choice explanation repaired only perceptions of ability, not benevolence or integrity, and (c) no differences were found between human and robotic partners. Our findings suggest that trust violations due to choices are harder to repair than those due to errors. Implications and future research directions are discussed.
Original language: English
Article number: 75
Pages (from-to): 1-27
Journal: ACM Transactions on Human-Robot Interaction
Volume: 14
Issue number: 4
Early online date: 20 Aug 2025
DOIs
Publication status: Published - 31 Dec 2025

Keywords

  • UT-Hybrid-D
  • Trust
  • Trust violations
  • Trust repair
  • Error
  • Choice
  • Human-Robot Interaction (HRI)
