TY - CHAP
T1 - Multimodal (and more physical) Interaction to Improve In-Car User Experience
AU - Ranasinghe, Champika
PY - 2021
Y1 - 2021
N2 - Automated vehicles have the potential to provide increased mobility to a broader range of users such as the elderly, children, people with physical limitations (e.g., visually impaired people, people with broken legs/arms), or people with other types of limitations, for example people who are nervous about driving [1, 2]. At the same time, driving becomes a secondary task, and drivers (and passengers) can engage in various other tasks such as work, leisure, or even sleep. This requires interacting with the vehicle at various levels, for example to take control of driving when intervention is necessary, to make driving-related decisions, or for the purpose of other tasks the user is engaged in (such as online meetings with colleagues using the car’s infrastructure). This often involves two types of interaction: interaction for the purpose of the primary task (what the user is currently doing, for example playing a game) and interaction for secondary tasks while the user’s attention is on another primary task (for example, receiving the status of the traffic ahead while playing a game using the car’s infrastructure). To better facilitate these interactions, autonomous vehicles can benefit from multimodal interaction and the use of different (and often multiple) human sensory modalities. On the one hand, little research has been done on how different human sensory modalities can best be used to facilitate in-car interaction. On the other hand, the users of autonomous vehicles, their use situations, and their interaction requirements remain largely unexplored. Except for speech and haptic-based interaction, little is known about using other modalities such as gestures, olfaction, and sonification, and how they can be used for different types of users and use situations. We aim to fill this gap by exploring how various sensory modalities can be used to enrich in-car interaction for various user groups and use situations.
AB - Automated vehicles have the potential to provide increased mobility to a broader range of users such as the elderly, children, people with physical limitations (e.g., visually impaired people, people with broken legs/arms), or people with other types of limitations, for example people who are nervous about driving [1, 2]. At the same time, driving becomes a secondary task, and drivers (and passengers) can engage in various other tasks such as work, leisure, or even sleep. This requires interacting with the vehicle at various levels, for example to take control of driving when intervention is necessary, to make driving-related decisions, or for the purpose of other tasks the user is engaged in (such as online meetings with colleagues using the car’s infrastructure). This often involves two types of interaction: interaction for the purpose of the primary task (what the user is currently doing, for example playing a game) and interaction for secondary tasks while the user’s attention is on another primary task (for example, receiving the status of the traffic ahead while playing a game using the car’s infrastructure). To better facilitate these interactions, autonomous vehicles can benefit from multimodal interaction and the use of different (and often multiple) human sensory modalities. On the one hand, little research has been done on how different human sensory modalities can best be used to facilitate in-car interaction. On the other hand, the users of autonomous vehicles, their use situations, and their interaction requirements remain largely unexplored. Except for speech and haptic-based interaction, little is known about using other modalities such as gestures, olfaction, and sonification, and how they can be used for different types of users and use situations. We aim to fill this gap by exploring how various sensory modalities can be used to enrich in-car interaction for various user groups and use situations.
U2 - 10.4230/DagRep.11.5.23
DO - 10.4230/DagRep.11.5.23
M3 - Chapter
T3 - Dagstuhl Reports
SP - 33
EP - 34
BT - Human-Computer Interaction to Support Work and Wellbeing in Mobile Environments
PB - Dagstuhl
CY - Dagstuhl, Germany
T2 - Dagstuhl Seminar on Human-Computer Interaction to Support Work and Wellbeing in Mobile Environments 2021
Y2 - 6 June 2021 through 11 June 2021
ER -