Abstract
Driver fatigue detection is an essential component of human-machine co-driving, promptly alerting intelligent systems to assume control of the vehicle. Fatigue detection based on EEG signals is currently a primary method for assessing a driver's mental state. However, EEG signals still suffer from problems such as susceptibility to noise interference and low cross-subject recognition rates. To address these issues, this study introduces a method for modeling driver fatigue through the fusion of features extracted from EEG and EOG signals. The proposed model takes as input high-precision EEG signals and EOG signals with good cross-subject consistency. Comprising EEGNet, SANet (Spatial Attention Network), and SCONV (Separable Convolution Network), the model employs a multi-modal approach to assess driver performance and quantify fatigue. The model's effectiveness was verified on the SEED-VIG dataset. Test results show that the method achieves a recognition accuracy of 91%, outperforming currently popular models. A cross-subject recognition test was also conducted, yielding an average accuracy of 87%, which demonstrates the model's generalization across subjects and verifies its effectiveness.