2023 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)

Abstract

Several studies have shown that electroencephalogram (EEG) signals can be contaminated by eye movements and facial muscle activity, and that this interference reduces the accuracy of EEG-based emotion recognition. In terms of model selection, EEG emotion recognition studies have mainly used convolutional neural networks and similar methods, which rely on global differences to distinguish emotional states but overlook the influence of local EEG changes. In this paper, we feed four-dimensional handcrafted EEG features into a Spatio-Temporal Swin Transformer. Using handcrafted features benefits the interpretability of the model, while the spatio-temporal attention modules extract secondary features and useful information from the EEG signals and suppress redundant noise. Compared with other emotion recognition methods, an advantage of our approach is that the input and output of the spatio-temporal attention modules have the same size: the modules adapt to the model's original input dimensions and can be widely reused as separable components in other models. We evaluate the feasibility and effectiveness of our model on the public emotion EEG datasets SEED and SEED-IV. The accuracies of single-subject and cross-subject three-class emotion classification on SEED are 94.13% ± 2.11% and 89.33% ± 4.37%, respectively, and the accuracy of single-subject four-class emotion classification on SEED-IV is 82.34% ± 8.17%. Our model achieves high accuracy in emotion classification together with real-time processing.
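The shape-preserving property claimed for the spatio-temporal attention modules can be illustrated with a minimal sketch. The (T, H, W, C) feature layout, the single attention head, and the random projection weights below are illustrative assumptions, not the paper's actual architecture; the point is only that a self-attention block can map an input tensor to an output of identical size, which is what makes such a module drop-in separable.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatiotemporal_attention(x, seed=0):
    """Shape-preserving single-head self-attention sketch.

    Input and output are both (T, H, W, C): T time steps, an H x W
    electrode grid, and C feature channels (e.g. frequency bands).
    The projection weights are random placeholders for illustration.
    """
    T, H, W, C = x.shape
    tokens = x.reshape(T * H * W, C)              # flatten grid into N tokens
    rng = np.random.default_rng(seed)
    Wq, Wk, Wv = (rng.standard_normal((C, C)) / np.sqrt(C) for _ in range(3))
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    attn = softmax(q @ k.T / np.sqrt(C))          # (N, N) attention weights
    out = attn @ v                                 # (N, C) attended features
    return out.reshape(T, H, W, C)                 # restore original 4-D shape

# e.g. 4 time windows, an 8 x 9 electrode map, 5 frequency bands
x = np.zeros((4, 8, 9, 5))
y = spatiotemporal_attention(x)
assert y.shape == x.shape  # sizes match, so the module is separable/drop-in
```

Because the output tensor has exactly the input's shape, such a block can be stacked or inserted into an existing model without changing the surrounding layers.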