For decades, brain–computer interfaces (BCIs) have been used to restore communication and mobility for disabled people through applications such as spellers, web browsers, and wheelchair controls. In parallel with advances in computational intelligence and the emergence of consumer BCI products, BCIs have recently begun to be considered as alternative modalities in human–computer interaction (HCI). One popular topic in HCI is multimodal interaction (MMI), which combines multiple modalities to provide powerful, flexible, adaptable, and natural interfaces. This article discusses the role of BCI as a modality within MMI research. State-of-the-art, real-time multimodal BCI applications are surveyed to demonstrate how BCI can serve as a modality in MMI. It is shown that multimodal use of BCIs can improve error handling, task performance, and user experience, and can broaden the user spectrum. The techniques for employing BCI in MMI are described, and the experimental and technical challenges are presented along with guidelines for overcoming them. Issues in input fusion, output fission, integration architectures, and data collection are covered.
- Number of pages: 16
- Journal: International Journal of Human-Computer Interaction
- Publication status: Published - Apr 2012
- HMI-CI: Computational Intelligence
- HMI-IA: Intelligent Agents
- HMI-MI: Multimodal Interactions