Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/161791
Title: Visual-to-EEG cross-modal knowledge distillation for continuous emotion recognition
Authors: Zhang, Su; Tang, Chuangao; Guan, Cuntai
Keywords: Engineering::Computer science and engineering
Issue Date: 2022
Source: Zhang, S., Tang, C. & Guan, C. (2022). Visual-to-EEG cross-modal knowledge distillation for continuous emotion recognition. Pattern Recognition, 130, 108833. https://dx.doi.org/10.1016/j.patcog.2022.108833
Project: A20G8b0102
Journal: Pattern Recognition
Abstract: The visual modality is one of the most dominant modalities in current continuous emotion recognition methods. In comparison, the EEG modality is less reliable due to intrinsic limitations such as subject bias and low spatial resolution. This work attempts to improve the continuous prediction of the EEG modality by using the dark knowledge from the visual modality. The teacher model is built with a cascaded convolutional neural network - temporal convolutional network (CNN-TCN) architecture, and the student model is built with TCNs. They are fed video frames and EEG average band power features, respectively. Two data partitioning schemes are employed, i.e., trial-level random shuffling (TRS) and leave-one-subject-out (LOSO). The standalone teacher and student each produce continuous predictions superior to the baseline method, and visual-to-EEG cross-modal KD further improves the prediction with statistical significance, i.e., p-value < 0.01 for TRS and p-value < 0.05 for LOSO partitioning. The saliency maps of the trained student model show that the active valence state is not localized to precise brain areas; rather, it results from synchronized activity among various brain areas. Moreover, the fast beta and gamma waves, with frequencies of 18-30 Hz and 30-45 Hz, contribute the most to the human emotion process compared to other bands. The code is available at https://github.com/sucv/Visual_to_EEG_Cross_Modal_KD_for_CER.
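The abstract does not spell out the exact training objective, so the sketch below is only a minimal illustration of the cross-modal distillation idea it describes: the EEG student is fitted both to the ground-truth emotion trace (here via a concordance-correlation-coefficient loss, common in continuous emotion recognition) and to the visual teacher's predictions (here via MSE). The `alpha` weighting and function names are hypothetical, not taken from the paper.

```python
import numpy as np

def ccc(x, y):
    # Concordance correlation coefficient between two 1-D sequences.
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)

def kd_loss(student_pred, teacher_pred, labels, alpha=0.5):
    # Supervised term: 1 - CCC against the annotated valence trace.
    supervised = 1.0 - ccc(student_pred, labels)
    # Distillation term: match the teacher's continuous predictions.
    distill = np.mean((student_pred - teacher_pred) ** 2)
    return alpha * supervised + (1.0 - alpha) * distill
```

When the student exactly matches both the labels and the teacher, the combined loss is zero; any disagreement with either signal increases it, which is the trade-off the `alpha` hyperparameter controls in this sketch.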
URI: https://hdl.handle.net/10356/161791
ISSN: 0031-3203
DOI: 10.1016/j.patcog.2022.108833
Schools: School of Computer Science and Engineering
Rights: © 2022 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections: SCSE Journal Articles
Files in This Item:
File | Description | Size | Format
---|---|---|---
1-s2.0-S0031320322003144-main.pdf | | 2.24 MB | Adobe PDF
SCOPUS Citations: 20 (updated on Sep 22, 2023)
Web of Science Citations: 50 (updated on Sep 26, 2023)
Page view(s): 72 (updated on Sep 26, 2023)
Download(s): 28 (updated on Sep 26, 2023)
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.