Trust Development in Military and Civilian Human–Agent Teams: The Effect of Social-Cognitive Recovery Strategies

E. S. Kox*, L. B. Siegling, J. H. Kerstholt

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review



Autonomous agents (AA) will increasingly be deployed as teammates instead of tools. In many operational situations, flawless performance from AA cannot be guaranteed. This may lead to a breach in the human’s trust, which can compromise collaboration. This highlights the importance of considering how to deal with error and trust violations when designing AA. The aim of this study was to explore the influence of uncertainty communication and apology on the development of trust in a Human–Agent Team (HAT) when a trust violation occurs. Two experimental studies following the same method were performed with (I) a civilian group and (II) a military group of participants. The online task environment resembled a house search in which the participant was accompanied and advised by an AA as their artificial team member. Halfway through the task, incorrect advice evoked a trust violation. Uncertainty communication was manipulated within-subjects, apology between-subjects. Our results showed that (a) communicating uncertainty led to higher levels of trust in both studies, (b) incorrect advice from the agent led to a less severe decline in trust when that advice included a measure of uncertainty, and (c) after a trust violation, trust recovered significantly more when the agent offered an apology. The two latter effects were only found in the civilian study. We conclude that tailored agent communication is a key factor in minimizing trust reduction in the face of agent failure and in maintaining effective long-term relationships in HATs. The difference in findings between participant groups emphasizes the importance of considering the (organizational) culture when designing artificial team members.

Original language: English
Pages (from-to): 1323-1338
Number of pages: 16
Journal: International Journal of Social Robotics
Issue number: 5
Publication status: Published - Jul 2022


  • Autonomous agents
  • Human–agent teaming
  • Individual differences
  • Transparency
  • Trust
  • Trust repair
  • User-centered design
  • UT-Hybrid-D


