Trust in a Human-Computer Collaborative Task With or Without Lexical Alignment

Sumit Srivastava, Mariet Theune, Alejandro Catalá, Chris Reed

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review


Abstract

Lexical alignment is a form of personalization frequently found in human-human conversations. Recently, attempts have been made to incorporate it into human-computer conversations. We describe an experiment investigating users' trust in the performance of a conversational agent that either lexically aligns or misaligns during a collaborative task. Participants performed a travel planning task on a fictional map with the help of the agent, rescuing residents while minimizing the travel path. We found that trust in the conversational agent was not significantly affected by the agent's alignment capability.
Original language: English
Title of host publication: UMAP Adjunct '24: Adjunct Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization
Publisher: Association for Computing Machinery
Pages: 189-194
Number of pages: 6
ISBN (Electronic): 979-8-4007-0466-6
DOIs
Publication status: Published - 28 Jun 2024
Event: 32nd ACM Conference on User Modeling, Adaptation and Personalization, ACM UMAP 2024 - Cagliari, Italy
Duration: 1 Jul 2024 - 4 Jul 2024
Conference number: 32
https://www.um.org/umap2024/

Conference

Conference: 32nd ACM Conference on User Modeling, Adaptation and Personalization, ACM UMAP 2024
Abbreviated title: ACM UMAP 2024
Country/Territory: Italy
City: Cagliari
Period: 1/07/24 - 4/07/24

Keywords

  • lexical alignment
  • human-agent interaction
  • conversational agents
  • performance trust

