Abstract
Many decisions in life involve tradeoffs: to gain something, one often has to lose something in return. As robots become more autonomous, their decisions will extend beyond mere assessments (e.g., detecting a threat) to making choices (e.g., taking the faster or the safer route). The aim of the current research was to study perceived trustworthiness in scenarios involving adverse consequences due to (1) an assessment error versus (2) a deliberate choice. Perceived trustworthiness (ability, benevolence, integrity) was measured repeatedly during a computer task simulating a military mission. Participants teamed with either a virtual human or a robotic partner who led the way and warned of potential danger. After encountering a hazard, the partner explained that it had either (1) failed to detect the threat (error) or (2) prioritized the mission and chosen the fastest route despite the risk (choice). Results showed that (a) the error explanation repaired all trustworthiness dimensions, (b) the choice explanation repaired only perceptions of ability, not benevolence or integrity, and (c) no differences were found between human and robotic partners. Our findings suggest that trust violations due to choices are harder to repair than those due to errors. Implications and future research directions are discussed.
| Original language | English |
|---|---|
| Article number | 75 |
| Pages (from-to) | 1-27 |
| Journal | ACM Transactions on Human-Robot Interaction |
| Volume | 14 |
| Issue number | 4 |
| Early online date | 20 Aug 2025 |
| Publication status | Published - 31 Dec 2025 |
Keywords
- UT-Hybrid-D
- Trust
- Trust violations
- Trust repair
- Error
- Choice
- Human-Robot Interaction (HRI)