• 7 Citations

Abstract

In the near future, brain-computer interface (BCI) applications for non-disabled users will require multimodal interaction and tolerance to dynamic environments. However, this conflicts with the highly sensitive recording techniques used for BCIs, such as electroencephalography (EEG). Advanced machine learning and signal processing techniques are required to decorrelate desired brain signals from the rest. This paper proposes a signal processing pipeline and two classification methods suitable for multiclass EEG analysis. The methods were tested in an experiment on separating left/right hand imagery in the presence/absence of speech. The analyses showed that the presence of speech during motor imagery did not affect the classification accuracy significantly and, regardless of the presence of speech, the proposed methods were able to separate left and right hand imagery with an accuracy of 60%. The best overall accuracy achieved for the 5-class separation of all the tasks was 47%, and both proposed methods performed equally well. In addition, the analysis of event-related spectral power changes revealed characteristics related to motor imagery and speech.
Original language: Undefined
Title of host publication: The 2010 International Joint Conference on Neural Networks (IJCNN)
Place of publication: USA
Publisher: IEEE
Pages: 1235-1242
Number of pages: 8
ISBN (Print): 978-1-4244-6916-1
DOI: 10.1109/IJCNN.2010.5595733
State: Published - 14 Oct 2010
Event: 2010 International Joint Conference on Neural Networks, IJCNN 2010 - Barcelona, Spain

Publication series

Publisher: IEEE

Conference

Conference: 2010 International Joint Conference on Neural Networks, IJCNN 2010
Abbreviated title: IJCNN
Country: Spain
City: Barcelona
Period: 18/07/10 - 23/07/10

Keywords

  • METIS-271125
  • EWI-18787
  • IR-74665

Cite this

Gürkök, H., Poel, M., & Zwiers, J. (2010). Classifying motor imagery in presence of speech. In The 2010 International Joint Conference on Neural Networks (IJCNN) (pp. 1235-1242). USA: IEEE. DOI: 10.1109/IJCNN.2010.5595733

Gürkök, Hayrettin; Poel, Mannes; Zwiers, Jakob / Classifying motor imagery in presence of speech.

The 2010 International Joint Conference on Neural Networks (IJCNN). USA : IEEE, 2010. p. 1235-1242.

Research output: Scientific - peer-review › Conference contribution

@inbook{3eaad527548347d58588fd5ebcc5bd0a,
title = "Classifying motor imagery in presence of speech",
abstract = "In the near future, brain-computer interface (BCI) applications for non-disabled users will require multimodal interaction and tolerance to dynamic environment. However, this conflicts with the highly sensitive recording techniques used for BCIs, such as electroencephalography (EEG). Advanced machine learning and signal processing techniques are required to decorrelate desired brain signals from the rest. This paper proposes a signal processing pipeline and two classification methods suitable for multiclass EEG analysis. The methods were tested in an experiment on separating left/right hand imagery in presence/absence of speech. The analyses showed that the presence of speech during motor imagery did not affect the classification accuracy significantly and regardless of the presence of speech, the proposed methods were able to separate left and right hand imagery with an accuracy of 60%. The best overall accuracy achieved for the 5-class separation of all the tasks was 47% and both proposed methods performed equally well. In addition, the analysis of event-related spectral power changes revealed characteristics related to motor imagery and speech.",
keywords = "METIS-271125, EWI-18787, IR-74665",
author = "Hayrettin Gürkök and Mannes Poel and Jakob Zwiers",
note = "10.1109/IJCNN.2010.5595733",
year = "2010",
month = "10",
doi = "10.1109/IJCNN.2010.5595733",
isbn = "978-1-4244-6916-1",
publisher = "IEEE",
pages = "1235--1242",
booktitle = "The 2010 International Joint Conference on Neural Networks (IJCNN)",

}
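The exported record above uses the `@inbook` entry type, which BibTeX's standard styles reserve for a titled part of a book; standard `.bst` files ignore its `booktitle` field, so the proceedings title can be dropped from the formatted reference. For a paper in conference proceedings, `@inproceedings` is usually the better fit. A hand-cleaned sketch of the same record (the citation key and the `address` and `month` fields are inferred from the metadata above, not part of the export):

```bibtex
% Hypothetical cleaned-up entry; all field values are taken from the record above.
@inproceedings{gurkok2010classifying,
  title     = {Classifying motor imagery in presence of speech},
  author    = {G{\"u}rk{\"o}k, Hayrettin and Poel, Mannes and Zwiers, Jakob},
  booktitle = {The 2010 International Joint Conference on Neural Networks (IJCNN)},
  publisher = {IEEE},
  address   = {USA},
  year      = {2010},
  month     = oct,
  pages     = {1235--1242},
  isbn      = {978-1-4244-6916-1},
  doi       = {10.1109/IJCNN.2010.5595733},
}
```

With `@inproceedings`, the standard styles typeset the proceedings title after "In" and include the page range and publisher as expected.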

Gürkök, H, Poel, M & Zwiers, J 2010, Classifying motor imagery in presence of speech. in The 2010 International Joint Conference on Neural Networks (IJCNN). IEEE, USA, pp. 1235-1242, 2010 International Joint Conference on Neural Networks, IJCNN 2010, Barcelona, Spain, 18-23 July. DOI: 10.1109/IJCNN.2010.5595733

Classifying motor imagery in presence of speech. / Gürkök, Hayrettin; Poel, Mannes; Zwiers, Jakob.

The 2010 International Joint Conference on Neural Networks (IJCNN). USA : IEEE, 2010. p. 1235-1242.

Research output: Scientific - peer-review › Conference contribution

TY - CHAP

T1 - Classifying motor imagery in presence of speech

AU - Gürkök,Hayrettin

AU - Poel,Mannes

AU - Zwiers,Jakob

N1 - 10.1109/IJCNN.2010.5595733

PY - 2010/10/14

Y1 - 2010/10/14

N2 - In the near future, brain-computer interface (BCI) applications for non-disabled users will require multimodal interaction and tolerance to dynamic environment. However, this conflicts with the highly sensitive recording techniques used for BCIs, such as electroencephalography (EEG). Advanced machine learning and signal processing techniques are required to decorrelate desired brain signals from the rest. This paper proposes a signal processing pipeline and two classification methods suitable for multiclass EEG analysis. The methods were tested in an experiment on separating left/right hand imagery in presence/absence of speech. The analyses showed that the presence of speech during motor imagery did not affect the classification accuracy significantly and regardless of the presence of speech, the proposed methods were able to separate left and right hand imagery with an accuracy of 60%. The best overall accuracy achieved for the 5-class separation of all the tasks was 47% and both proposed methods performed equally well. In addition, the analysis of event-related spectral power changes revealed characteristics related to motor imagery and speech.

AB - In the near future, brain-computer interface (BCI) applications for non-disabled users will require multimodal interaction and tolerance to dynamic environment. However, this conflicts with the highly sensitive recording techniques used for BCIs, such as electroencephalography (EEG). Advanced machine learning and signal processing techniques are required to decorrelate desired brain signals from the rest. This paper proposes a signal processing pipeline and two classification methods suitable for multiclass EEG analysis. The methods were tested in an experiment on separating left/right hand imagery in presence/absence of speech. The analyses showed that the presence of speech during motor imagery did not affect the classification accuracy significantly and regardless of the presence of speech, the proposed methods were able to separate left and right hand imagery with an accuracy of 60%. The best overall accuracy achieved for the 5-class separation of all the tasks was 47% and both proposed methods performed equally well. In addition, the analysis of event-related spectral power changes revealed characteristics related to motor imagery and speech.

KW - METIS-271125

KW - EWI-18787

KW - IR-74665

U2 - 10.1109/IJCNN.2010.5595733

DO - 10.1109/IJCNN.2010.5595733

M3 - Conference contribution

SN - 978-1-4244-6916-1

SP - 1235

EP - 1242

BT - The 2010 International Joint Conference on Neural Networks (IJCNN)

PB - IEEE

ER -

Gürkök H, Poel M, Zwiers J. Classifying motor imagery in presence of speech. In The 2010 International Joint Conference on Neural Networks (IJCNN). USA: IEEE. 2010. p. 1235-1242. DOI: 10.1109/IJCNN.2010.5595733