Abstract
In multi-agent, multi-user environments, users as well as agents should have a means of establishing who is talking to whom. In this paper, we present an experiment aimed at evaluating whether gaze directional cues of users could be used for this purpose. Using an eye tracker, we measured subject gaze at the faces of conversational partners during four-person conversations. Results indicate that when someone is listening or speaking to an individual, there is indeed a high probability that the person looked at is the person being listened to (p=88%) or spoken to (p=77%). We conclude that gaze is an excellent predictor of conversational attention in multiparty conversations. As such, it may form a reliable source of input for conversational systems that need to establish whom the user is speaking or listening to. We implemented our findings in FRED, a multi-agent conversational system that uses eye input to gauge which agent the user is listening or speaking to.
Original language | English |
---|---|
Title of host publication | CHI 2001: Anyone, Anywhere |
Subtitle of host publication | CHI 2001 conference proceedings |
Editors | J. Jacko, A. Sears, M. Beaudouin-Lafon, R.J.K. Jacob |
Place of Publication | New York, NY |
Publisher | ACM Press |
Pages | 301-308 |
Number of pages | 8 |
ISBN (Print) | 9781581133271 |
Publication status | Published - 31 Mar 2001 |
Event | 2001 SIGCHI Conference on Human Factors in Computing Systems, CHI 2001: Anyone, Anywhere - Seattle, United States |
Duration | 31 Mar 2001 → 5 Apr 2001 |
Conference
Conference | 2001 SIGCHI Conference on Human Factors in Computing Systems, CHI 2001 |
---|---|
Abbreviated title | CHI |
Country/Territory | United States |
City | Seattle |
Period | 31/03/01 → 5/04/01 |
Keywords
- HMI-MI: MULTIMODAL INTERACTIONS