GaVe: A webcam-based gaze vending interface using one-point calibration

Zhe Zeng, S. Liu, Hao Cheng, Hailong Liu, Yang Li, Yu Feng, Felix Wilhelm Siebert

Research output: Contribution to journal › Article › Academic › peer-review



Gaze input, i.e., information input via users' eye movements, represents a promising method for contact-free interaction in human-machine systems. In this paper, we present the GazeVending interface (GaVe), which lets users control actions on a display with their eyes. The interface works with a regular webcam, available on most of today's laptops, and requires only a short one-point calibration before use. GaVe is designed in a hierarchical structure, first presenting broad item clusters to users and then guiding them through a second selection round, which allows a large number of items to be presented. Cluster/item selection in GaVe is based on dwell time, i.e., the duration for which users look at a given cluster/item. A user study (N=22) was conducted to determine optimal dwell-time thresholds and comfortable human-to-display distances. Users' perception of the system, as well as error rates and task completion times, were recorded. We found that all participants quickly understood how to interact with the interface and showed good performance, selecting a target item within a group of 12 items in 6.76 seconds on average. We provide design guidelines for GaVe and discuss the potential of the system.
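To illustrate the dwell-time selection mechanism described in the abstract, the following is a minimal Python sketch. It assumes gaze estimates from a webcam have already been mapped onto on-screen regions; the function and variable names (select_by_dwell, gaze_samples, threshold_s) and the 0.8 s default are illustrative assumptions, not values or APIs from the paper.

```python
# Minimal sketch of dwell-time selection. Assumes a stream of
# timestamped gaze samples already mapped onto display regions.
# All names and the threshold value are hypothetical, not from the paper.

def select_by_dwell(gaze_samples, threshold_s=0.8):
    """Return the first region fixated continuously for threshold_s seconds.

    gaze_samples: iterable of (timestamp_s, region_id or None) pairs,
    ordered by time; None means the gaze fell outside all regions.
    """
    current, dwell_start = None, None
    for t, region in gaze_samples:
        if region != current:            # gaze moved to a different region
            current, dwell_start = region, t
        elif region is not None and t - dwell_start >= threshold_s:
            return region                # dwell threshold reached: select
    return None

# Example: samples at ~30 Hz; the user fixates the "drinks" cluster long enough.
samples = [(i / 30, "drinks" if i >= 5 else "snacks") for i in range(60)]
print(select_by_dwell(samples))  # -> "drinks"
```

In the hierarchical design described in the paper, such a selection step would run twice: once over the broad item clusters, and then again over the items within the chosen cluster.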
Original language: English
Pages (from-to): 1-13
Journal: Journal of Eye Movement Research
Issue number: 1
Publication status: Published - 2023



