Abstract
In this paper, we discuss why, in designing multiparty mediated systems, we should focus first on providing non-verbal cues which are less redundantly coded in speech than those normally conveyed by video. We show how conveying one such cue, gaze direction, may solve two problems in multiparty mediated communication and collaboration: knowing who is talking to whom, and who is talking about what. As a candidate solution, we present the GAZE Groupware System, which combines support for gaze awareness in multiparty mediated communication and collaboration with small and linear bandwidth requirements. The system uses an advanced, desk-mounted eyetracker to metaphorically convey gaze awareness in a 3D virtual meeting room and within shared documents.
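The linear bandwidth claim follows from sending gaze state rather than video: each participant transmits only a small "who or what I am looking at" message, so traffic grows with N rather than with the N·(N-1) pairwise streams of full video conferencing. The sketch below is a minimal, hypothetical illustration of that idea (the region names, coordinates, and helper functions are invented for this example and are not the GAZE system's actual code or VRML scene): it maps an eyetracker fixation point to the persona or shared document it falls on, which is all that would need to be broadcast.

```python
from dataclasses import dataclass

@dataclass
class Region:
    """Screen-space rectangle occupied by a persona or a shared document."""
    name: str
    x: float
    y: float
    w: float
    h: float

    def contains(self, gx: float, gy: float) -> bool:
        return self.x <= gx <= self.x + self.w and self.y <= gy <= self.y + self.h


def gaze_target(gx: float, gy: float, regions: list[Region]) -> str | None:
    """Map an eyetracker fixation point to the object currently being looked at."""
    for r in regions:
        if r.contains(gx, gy):
            return r.name
    return None


# Hypothetical layout of the meeting screen: two personas and one document.
regions = [
    Region("persona:participant-A", 0, 0, 200, 200),
    Region("persona:participant-B", 220, 0, 200, 200),
    Region("document:shared-notes", 0, 220, 420, 300),
]

# A fixation point as it might be reported by a desk-mounted eyetracker (assumed input).
fixation = (250.0, 90.0)

# Only this small "looking at X" state would be sent to the other participants,
# so total traffic grows linearly with N instead of requiring N*(N-1) video streams.
print(gaze_target(fixation[0], fixation[1], regions))  # -> persona:participant-B
```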
Original language | English |
---|---|
Title of host publication | CHI '99 |
Subtitle of host publication | Proceedings of the SIGCHI conference on Human Factors in Computing Systems |
Publisher | Association for Computing Machinery |
Pages | 294-301 |
Number of pages | 8 |
ISBN (Print) | 0-201-48559-1 |
DOIs | |
Publication status | Published - 6 Jan 1999 |
Event | 1999 SIGCHI Conference on Human Factors in Computing Systems, CHI 1999 - Pittsburgh, United States |
Duration | 15 May 1999 → 20 May 1999 |
Conference
Conference | 1999 SIGCHI Conference on Human Factors in Computing Systems, CHI 1999 |
---|---|
Abbreviated title | CHI |
Country/Territory | United States |
City | Pittsburgh |
Period | 15/05/99 → 20/05/99 |
Keywords
- METIS-137097
- CSCW
- Multiparty videoconferencing
- Awareness
- Attention
- Gaze direction
- VRML 2