Gesture Interaction at a Distance

F.W. Fikkert

Research output: Thesis › PhD Thesis - Research UT, graduation UT › Academic


Abstract

The aim of this work is to explore, from a perspective of human behavior, which gestures are suited to controlling large display surfaces from a short distance away; why that is so; and, equally important, how such an interface can be made a reality. A well-known example of the type of interface that is the focus of this thesis is portrayed in the science fiction movie ‘Minority Report’. The lead character uses hand gestures such as pointing, picking up and throwing away to interact with a wall-sized display in a believable way. Believable, because the gestures are familiar from everyday life and because the interface responds predictably. Although fictional in the movie, such gesture-based interfaces can, once realized, be applied in any environment equipped with large display surfaces: in a laboratory, to analyze and interpret large data sets; in interactive shop windows, to casually browse a product list; and in the operating room, to easily access a patient’s MRI scans. The common denominator is that the user cannot, or may not, touch the display: the interaction occurs at arm’s length and greater distances. In this thesis, the gestures that computer systems interpret are hand and arm movements. Users control the large display and its contents directly with their hands, through acts similar to those in ‘Minority Report’, by explicitly issuing commands to the system through gesturing.

After defining the elementary commands in such an interface (Chapter 2), we index existing approaches to building gesture-based interfaces (Chapter 3) and, more precisely, the gesture sets that have been used in these interfaces. A meticulous investigation of which gestures are suited to issuing these elementary commands, and why, then follows. In a Wizard of Oz setting, we explore the gestures that otherwise uninstructed users make when asked to issue a command through gesturing alone (Chapter 4). Gesturing as they see fit, users pan and zoom a map of the area around our university. Our observations show that each user consistently applies the same gesture for a given command, and that these gestures are largely similar across users. Moreover, gestures are explicitly started and ended by changing the hand shape from relaxed to tensed and back again. Users genuinely believed that they were in control of the display, immersed in an interaction that they found believable.

This consensus in the observed gestures is explored further with an online questionnaire (Chapter 5), filled out by a hundred users from multiple Western countries. User ratings of video-prototyped gesture interactions show significant preferences for certain gesture-command pairs. In addition, some gestures are preferably reused in a different context or system state, which improves understanding and prediction of the system’s responses. These results are validated in another (partial) Wizard of Oz setting (Chapter 6), in which users experience what it feels like to issue commands with the proposed gestures. The ratings in each investigated condition were similar, with minor differences mostly caused by physical comfort, or the lack thereof, while gesturing. Our findings were profoundly influenced by both traditional WIMP-style interfaces and recent mainstream multi-touch interfaces, which swayed our participants’ preferences towards some gestures.
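
The rest-to-tensed-to-rest pattern that delimits gestures lends itself to a simple segmentation state machine. The Python sketch below is illustrative only: the class names and the tracking input (a per-sample tension flag plus a hand position) are our own assumptions, not the setup used in the thesis. It shows how a recognizer could use hand tension to decide when a gesture begins and ends:

from enum import Enum, auto

class HandState(Enum):
    REST = auto()    # relaxed hand shape: no gesture in progress
    TENSED = auto()  # tensed hand shape: a gesture is being made

class GestureSegmenter:
    # Delimits gestures by hand tension, mirroring the observed
    # rest -> tensed -> rest pattern (Chapter 4).
    def __init__(self):
        self.state = HandState.REST
        self.trajectory = []  # hand positions sampled during a gesture

    def update(self, tensed, position):
        # Feed one tracking sample; returns a finished trajectory
        # (ready to be classified as pan, zoom, ...) or None.
        if self.state is HandState.REST and tensed:
            self.state = HandState.TENSED        # gesture explicitly starts
            self.trajectory = [position]
        elif self.state is HandState.TENSED and tensed:
            self.trajectory.append(position)     # gesture continues
        elif self.state is HandState.TENSED and not tensed:
            self.state = HandState.REST          # gesture explicitly ends
            done, self.trajectory = self.trajectory, []
            return done
        return None

Segmenting on hand tension sidesteps the classic "live microphone" problem of gesture recognition: movements made with a relaxed hand are simply never interpreted as commands.
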
To consolidate these findings, we designed, built and evaluated a gesture interface with which the user can interact with 3D and 2D visualizations of biochemical structures on a wall-sized display (Chapter 7). This prototype uses lasers for pointing, one for each hand, and small buttons attached to the fingers for issuing commands; the preferred gestures define the precise layout of these buttons on the hand. Again, we found that our participants preferred to interact with the least effort and the greatest comfort possible. There was little variation between users in the shape of the gestures they preferred: tapping the thumb on one of the other fingers was the prevalent gesture for indicating the beginning and end of a command, as it mimicked pressing a button.

When taking a human perspective on gestures suited to issuing commands to large-display interfaces, it is possible to formulate a set of intuitive gestures that come naturally to their users. These gestures are learned and remembered with ease. In addition, they are comfortable to perform, even when interacting for longer periods of time. We observe in our line of research that technological developments that reach mainstream distribution influence end-users’ perception of what is ‘intuitive’ and ‘natural’. Perhaps the best example is the influence that four decades of the keyboard-and-mouse interface have had on the public’s notion of human-computer interaction; more recent examples include the Nintendo Wii and the Apple iPhone. We, as designers of interfaces for future intelligent environments, depend heavily on this notion if we wish gesture-based interfaces to succeed in providing easy-to-use, intuitive interaction with the pervasive large display surfaces in those environments. The gestures described in this thesis are an important part of such interfaces.
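
To make the button-based command mechanism concrete, the sketch below (Python, illustrative only: the finger-to-command layout and all names are assumptions, not the actual mapping used in the Chapter 7 prototype) pairs each finger-mounted button with a command, treats a thumb tap as beginning a command and a second tap on the same finger as ending it, and takes the on-screen position from the laser dot:

# Hypothetical button layout; in the prototype the layout is derived
# from the preferred gesture-command pairs.
FINGER_COMMANDS = {"index": "select", "middle": "pan", "ring": "zoom"}

class CommandIssuer:
    # A thumb tap on a finger button starts a command; a second tap
    # on the same finger ends it, mimicking a button press.
    def __init__(self):
        self.active = None  # command currently in progress, if any

    def on_thumb_tap(self, finger, laser_xy):
        command = FINGER_COMMANDS.get(finger)
        if command is None:
            return None
        if self.active is None:
            self.active = command
            return ("begin", command, laser_xy)  # anchored at the laser dot
        if self.active == command:
            self.active = None
            return ("end", command, laser_xy)
        return None

# Example: begin and end a pan at the pointed-at screen positions.
issuer = CommandIssuer()
print(issuer.on_thumb_tap("middle", (0.4, 0.7)))  # ('begin', 'pan', (0.4, 0.7))
print(issuer.on_thumb_tap("middle", (0.6, 0.7)))  # ('end', 'pan', (0.6, 0.7))
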
Original language: Undefined
Awarding Institution
  • University of Twente
Supervisors/Advisors
  • van der Vet, P.E., Advisor
  • Nijholt, Antinus, Supervisor
  • van der Veer, Gerrit Cornelis, Supervisor
Award date: 11 Mar 2010
Place of publication: Enschede
Publisher: Centre for Telematics and Information Technology (CTIT)
Print ISBNs: 978-90-365-2973-0
DOIs: 10.3990/1.9789036529730
Publication status: Published - 11 Mar 2010

Keywords

  • METIS-270740
  • HMI-MI: MULTIMODAL INTERACTIONS
  • IR-69985
  • EWI-17507

Cite this

Fikkert, F. W. (2010). Gesture Interaction at a Distance. Enschede: Centre for Telematics and Information Technology (CTIT). https://doi.org/10.3990/1.9789036529730
@phdthesis{80375208519346a59a80012977285fa7,
title = "Gesture Interaction at a Distance",
keywords = "METIS-270740, HMI-MI: MULTIMODAL INTERACTIONS, IR-69985, EWI-17507",
author = "F.W. Fikkert",
note = "SIKS Dissertation Series No. 2010-07",
year = "2010",
month = "3",
day = "11",
doi = "10.3990/1.9789036529730",
language = "Undefined",
isbn = "978-90-365-2973-0",
publisher = "Centre for Telematics and Information Technology (CTIT)",
address = "Netherlands",
school = "University of Twente",

}
