Abstract
Emotion recognition during music listening using electroencephalogram (EEG) signals has recently gained increasing attention from researchers. Many studies have focused on accuracy for a single subject, while subject-independent performance remains unclear. In this paper, the objective is to create an emotion recognition model that can be applied to multiple subjects. By adopting convolutional neural networks (CNNs), the model can exploit information from both electrodes and time steps. Using CNNs also removes the need for handcrafted feature extraction, which might leave out related but unobserved features. CNNs with three to seven convolutional layers were deployed in this research. We measured their performance on a binary classification task for the emotion dimensions of arousal and valence. The results showed that our method captured EEG signal patterns across numerous subjects, achieving 81.54% and 86.87% accuracy for arousal and valence, respectively, under 10-fold cross-validation. The method also generalized to unseen subjects better than the previous method, as observed from the results of leave-one-subject-out validation.
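The abstract's key idea is applying 2D convolutions directly to the raw EEG window so that one kernel spans several electrodes and several time steps at once. The paper does not publish its exact layer configuration here, so the following is only a minimal NumPy sketch of that core operation; the input shape (32 electrodes × 128 time samples) and the 3×5 kernel size are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Assumed input shape for illustration: 32 electrodes x 128 time samples.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((32, 128))

def conv2d_valid(x, k):
    """Naive 'valid'-mode 2D cross-correlation, the basic CNN layer operation."""
    kh, kw = k.shape
    out = np.empty((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

# One hypothetical kernel spanning 3 neighboring electrodes x 5 time steps,
# so each output value mixes spatial (electrode) and temporal information.
kernel = rng.standard_normal((3, 5))
feature_map = np.maximum(conv2d_valid(eeg, kernel), 0.0)  # ReLU activation
print(feature_map.shape)  # (30, 124)
```

In a full model, a stack of three to seven such convolutional layers (each with many kernels) would feed a classifier head producing the binary arousal or valence label; frameworks like PyTorch or TensorFlow implement this same operation efficiently.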
Original language | English |
---|---|
Title of host publication | 2019 IEEE 15th International Colloquium on Signal Processing & Its Applications (CSPA) |
Publisher | IEEE |
Pages | 21-26 |
Number of pages | 6 |
ISBN (Electronic) | 978-1-5386-7563-2 |
ISBN (Print) | 978-1-5386-7564-9 |
DOIs | |
Publication status | Published - 8 Mar 2019 |
Event | 15th IEEE Colloquium on Signal Processing and its Applications, CSPA 2019 - Penang Island, Malaysia Duration: 8 Mar 2019 → 9 Mar 2019 Conference number: 15 |
Conference
Conference | 15th IEEE Colloquium on Signal Processing and its Applications, CSPA 2019 |
---|---|
Abbreviated title | CSPA 2019 |
Country/Territory | Malaysia |
City | Penang Island |
Period | 8/03/19 → 9/03/19 |
Keywords
- Electroencephalography
- Emotion recognition
- Convolutional Neural Network (CNN)