Perceived Mental Workload Classification Using Intermediate Fusion Multimodal Deep Learning

Tenzing C. Dolmans*, Mannes Poel, Jan-Willem J.R. van 't Klooster, Bernard P. Veldkamp

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › Peer-reviewed


Abstract

Much research has been done on the detection of mental workload (MWL) using various bio-signals. Recently, deep learning has allowed for novel methods and results. A plethora of measurement modalities have proven valuable in this task, yet current studies often use only a single modality to classify MWL. The goal of this research was to classify perceived mental workload (PMWL) using a deep neural network (DNN) that flexibly makes use of multiple modalities, in order to allow for feature sharing between modalities. To achieve this goal, an experiment was conducted in which MWL was induced with the help of verbal logic puzzles. The puzzles came in five levels of difficulty and were presented in a random order. Participants had 1 h to solve as many puzzles as they could. Between puzzles, they gave a difficulty rating on a scale from 1 to 7, with 7 the highest difficulty. Galvanic skin response, photoplethysmograms, functional near-infrared spectrograms and eye movements were collected simultaneously using LabStreamingLayer (LSL). Marker information from the puzzles was also streamed on LSL. We designed and evaluated a novel intermediate fusion multimodal DNN for the classification of PMWL using the aforementioned four modalities. Two main criteria guided the design and implementation of our DNN: modularity and generalisability. We were able to classify PMWL to within one level (0.985 levels) on a seven-level workload scale using the aforementioned modalities. The model architecture allows for easy addition and removal of modalities without major structural implications, owing to the modular nature of the design. Furthermore, we showed that our neural network performed better when using multiple modalities, as opposed to a single modality. The dataset and code used in this paper are openly available.
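The intermediate-fusion idea described in the abstract — one encoder per modality whose intermediate features are concatenated before a shared classification head, so that modalities can be added or removed without restructuring the network — can be sketched as follows. This is a minimal NumPy illustration of the general technique, not the authors' implementation; all layer sizes, names, and feature dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)


def relu(x):
    return np.maximum(x, 0.0)


class ModalityEncoder:
    """A small dense encoder for one modality (hypothetical sizes)."""

    def __init__(self, in_dim, feat_dim=8):
        self.W = rng.standard_normal((in_dim, feat_dim)) * 0.1
        self.b = np.zeros(feat_dim)
        self.feat_dim = feat_dim

    def __call__(self, x):
        # Map raw modality features to an intermediate representation.
        return relu(x @ self.W + self.b)


class IntermediateFusionNet:
    """Intermediate fusion: encode each modality separately, concatenate
    the intermediate features, then apply a shared classification head.
    Adding or removing a modality only changes the dict of encoders and
    the fused dimension, not the overall structure."""

    def __init__(self, modality_dims, n_classes=7):
        self.encoders = {m: ModalityEncoder(d) for m, d in modality_dims.items()}
        fused_dim = sum(e.feat_dim for e in self.encoders.values())
        self.W = rng.standard_normal((fused_dim, n_classes)) * 0.1
        self.b = np.zeros(n_classes)

    def __call__(self, inputs):
        # Fuse at the intermediate-feature level, not at the raw-input
        # (early fusion) or decision (late fusion) level.
        feats = [self.encoders[m](inputs[m]) for m in self.encoders]
        fused = np.concatenate(feats)
        logits = fused @ self.W + self.b
        return int(np.argmax(logits))  # predicted workload level index, 0..6


# The four modalities from the study, with made-up per-modality dimensions.
dims = {"gsr": 4, "ppg": 6, "fnirs": 16, "eye": 10}
net = IntermediateFusionNet(dims)
sample = {m: rng.standard_normal(d) for m, d in dims.items()}
pred = net(sample)
```

Dropping a modality here amounts to removing its key from `dims`, which mirrors the modularity claim in the abstract: the fusion layer's width changes, but no other structural change is needed.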
Original language: English
Article number: 609096
Journal: Frontiers in Human Neuroscience
Volume: 14
DOIs
Publication status: Published - 11 Jan 2021

Keywords

  • Brain-computer interface (BCI)
  • Deep learning (DL)
  • Multimodal deep learning architecture
  • Device synchronisation
  • fNIRS (functional near infrared spectroscopy)
  • GSR (galvanic skin response)
  • PPG (photoplethysmography)
  • Eye tracking
