The Impact of Anthropomorphic Cues and Explanations on Trust Formation, Violation, and Repair in HRI: Insights from a VR Experiment

Esther S. Kox*, M. Birna van Riemsdijk, Peter W. de Vries, José H. Kerstholt

*Corresponding author for this work

Research output: Working paper › Discussion paper

Abstract

Trust violations in HRI are inevitable, making trust repair strategies essential for maintaining appropriate levels of trust. Research suggests that human-like cues in a robot's design and communication style can affect how people respond to trust violations and repair efforts. This study investigates how subtle anthropomorphic cues influence the formation, violation, and repair of human-robot trust in the context of an ability-based trust violation, with an explanation used as the trust repair strategy. This paper presents findings from an experiment (n = 54) in which participants performed two military house-search missions in Virtual Reality (VR), using an omnidirectional VR locomotion treadmill (Cyberith Virtualizer ELITE 2). A 2 (agent type: human-like vs. machine-like) × 2 (explanation: present vs. absent) mixed factorial design was used, with repeatedly measured self-reported trust in the agent (prior, violated, final) as the dependent variable. Results indicate that, although the communication-style manipulation was subtle, participants perceived the human-like robot as significantly more human-like than the machine-like robot. However, neither the anthropomorphic cues nor the presence of an explanation had a significant effect on trust development. Finally, we discuss the methodological advantages and challenges of using VR for HRI trust research.
Original language: English
Publisher: OSF
DOIs
Publication status: Published - 2024
