Abstract
Lexical alignment is a form of personalization frequently found in human-human conversations. Recently, attempts have been made to incorporate it into human-computer conversations. We describe an experiment investigating users' trust in the performance of a conversational agent that lexically aligns or misaligns with them during a collaborative task. Participants performed a travel planning task with the help of the agent, which involved rescuing residents and minimizing the travel path on a fictional map. We found that trust in the conversational agent was not significantly affected by its alignment capability.
Original language | English |
---|---|
Title of host publication | UMAP Adjunct '24: Adjunct Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization |
Publisher | Association for Computing Machinery |
Pages | 189-194 |
Number of pages | 6 |
ISBN (Electronic) | 979-8-4007-0466-6 |
DOIs | |
Publication status | Published - 28 Jun 2024 |
Event | 32nd ACM Conference on User Modeling, Adaptation and Personalization, ACM UMAP 2024 - Cagliari, Italy. Duration: 1 Jul 2024 → 4 Jul 2024. Conference number: 32. https://www.um.org/umap2024/ |
Conference
Conference | 32nd ACM Conference on User Modeling, Adaptation and Personalization, ACM UMAP 2024 |
---|---|
Abbreviated title | ACM UMAP 2024 |
Country/Territory | Italy |
City | Cagliari |
Period | 1/07/24 → 4/07/24 |
Internet address | https://www.um.org/umap2024/ |
Keywords
- lexical alignment
- human-agent interaction
- conversational agents
- performance trust