Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/183828
Title: Data augmentation for EEG emotion recognition
Authors: Hong, Isaac Zhang Jie
Keywords: Computer and Information Science
Issue Date: 2025
Publisher: Nanyang Technological University
Source: Hong, I. Z. J. (2025). Data augmentation for EEG emotion recognition. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/183828
Project: CCDS24-0611

Abstract:

Data Augmentation (DA) techniques hold significant promise for improving the performance of Deep Learning (DL) models in electroencephalography (EEG) analysis. This project evaluates the effectiveness of various DA methods, as outlined in the review paper "Data augmentation for deep-learning-based electroencephalography", by applying them to TSception, a DL model for binary classification of EEG-based emotion recognition along the valence and arousal dimensions. The investigated DA categories are Noise Addition, Sliding Window, Recombination of Segmentation, Fourier Transform, and Generative Adversarial Networks (GANs).

The study used the Database for Emotion Analysis using Physiological Signals (DEAP) and the Multimodal Database for Affect Recognition and Implicit Tagging (MAHNOB-HCI), with EEG trials segmented into 4-second windows and evaluated through trial-wise 10-fold cross-validation with 500 epochs per fold. DA did not improve performance on the DEAP dataset in either the valence or the arousal dimension. On the MAHNOB-HCI dataset, however, Recombination of Segmentation (recombDA) was the most effective technique in both dimensions, while the GAN-based augmentation (cWDCGAN-GP) performed the worst.

Uniform Manifold Approximation and Projection (UMAP) visualizations of the data distributions showed that augmented data generally clustered closely with original data of the same class, providing realistic yet varied information that improved TSception's generalizability. For subjects whose performance deteriorated under recombDA, however, the augmented data tended to bridge the class clusters, increasing the uniformity of feature distributions between training and validation sets and reducing model generalizability. To address this, a DBSCAN-enhanced recombDA was proposed, which identifies and removes these ambiguous data points from the augmentation process. While this approach improved accuracy for more than half of all subjects by refining the quality of the augmented data, F1 Macro scores decreased because removing data points exacerbated class imbalance.

The study also highlighted the subject- and dimension-dependent nature of EEG DA: the same DA technique improved performance for some subjects while degrading it for others, and its impact on a single subject could differ between the valence and arousal dimensions. A Personalized Augmentation (PA) approach was therefore proposed, selecting for each subject and dimension the DA technique with the highest validation mean F1 Macro score. PA outperformed all other methods in the arousal dimension of MAHNOB-HCI and performed comparably to the second-best technique in valence, demonstrating its potential to enhance model performance by tailoring augmentation strategies to the individual. Future research could extend these findings to more datasets and EEG paradigms, refine DA approaches such as the DBSCAN-enhanced method, and explore combining them with other novel DA techniques.
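For concreteness, a minimal sketch of the evaluation protocol described in the abstract: non-overlapping 4-second windows, and trial-wise 10-fold cross-validation in which folds are split over whole trials so that windows from one trial never appear in both training and validation. The 128 Hz sampling rate and all helper names are illustrative assumptions, not taken from the report.

```python
import numpy as np
from sklearn.model_selection import KFold

def segment_trial(trial, sfreq=128, win_sec=4):
    """Split one EEG trial (channels x samples) into non-overlapping
    win_sec-second windows; 128 Hz is an assumed sampling rate."""
    win = int(sfreq * win_sec)
    n_win = trial.shape[1] // win
    return np.stack([trial[:, i * win:(i + 1) * win] for i in range(n_win)])

def trialwise_folds(n_trials, n_splits=10, seed=0):
    """Trial-wise 10-fold CV: indices are split over whole trials, so all
    windows of a trial land on the same side of the train/validation split."""
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    return list(kf.split(np.arange(n_trials)))
```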
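Of the DA categories investigated, Noise Addition is the simplest. A sketch assuming a target-SNR formulation; the report may use a different noise model or scaling, so `snr_db` is an assumed parameter.

```python
import numpy as np

def gaussian_noise_augment(x, snr_db=5.0, rng=None):
    """Noise Addition DA: add zero-mean Gaussian noise scaled so that the
    signal-to-noise ratio of the result is approximately snr_db."""
    rng = np.random.default_rng(rng)
    sig_power = np.mean(x ** 2)
    noise_power = sig_power / (10.0 ** (snr_db / 10.0))
    return x + rng.normal(0.0, np.sqrt(noise_power), size=x.shape)
```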
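Recombination of Segmentation (recombDA) builds synthetic trials by mixing windows drawn from different real trials of the same class. A sketch under that reading; the array layout and sampling with replacement are assumptions.

```python
import numpy as np

def recomb_augment(segments, labels, n_new, rng=None):
    """Recombination of Segmentation: each synthetic trial takes each of its
    windows from a (possibly different) real trial of the same class.
    segments: array of shape (n_trials, n_windows, n_channels, n_samples)."""
    rng = np.random.default_rng(rng)
    n_windows = segments.shape[1]
    aug_x, aug_y = [], []
    for _ in range(n_new):
        cls = rng.choice(np.unique(labels))        # target class for this synthetic trial
        pool = np.flatnonzero(labels == cls)       # real trials of that class
        donors = rng.choice(pool, size=n_windows)  # one donor trial per window slot
        aug_x.append(np.stack([segments[t, w] for w, t in enumerate(donors)]))
        aug_y.append(cls)
    return np.stack(aug_x), np.array(aug_y)
```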
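One plausible reading of the DBSCAN-enhanced step: embed the candidate augmented samples (here with UMAP, matching the visualizations mentioned in the abstract), cluster the embedding with DBSCAN, and discard points that DBSCAN marks as noise or that fall in clusters mixing both classes, i.e. the points bridging the class clusters. The parameters and the mixed-cluster rule are assumptions, not the report's exact criterion.

```python
import numpy as np
import umap  # pip install umap-learn
from sklearn.cluster import DBSCAN

def filter_ambiguous(aug_feats, aug_labels, eps=0.5, min_samples=5):
    """Keep only augmented points whose DBSCAN cluster (in UMAP space) is
    pure in class; drop DBSCAN noise (-1) and class-mixed clusters."""
    aug_labels = np.asarray(aug_labels)
    emb = umap.UMAP(n_components=2, random_state=0).fit_transform(aug_feats)
    cluster_ids = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(emb)
    keep = np.ones(len(aug_labels), dtype=bool)
    for c in np.unique(cluster_ids):
        members = cluster_ids == c
        if c == -1 or len(np.unique(aug_labels[members])) > 1:
            keep[members] = False  # ambiguous: DBSCAN noise or mixed-class cluster
    return keep
```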
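The PA selection rule reduces to a per-subject, per-dimension argmax over validation mean F1 Macro scores. A minimal sketch with an assumed nested-dict layout for the scores.

```python
def select_personalized_augmentation(val_f1):
    """PA rule: for each subject and dimension, choose the DA technique with
    the highest validation mean F1 Macro. Assumed layout:
    val_f1[subject][dimension][technique] = mean F1 Macro across folds."""
    return {
        subject: {dim: max(scores, key=scores.get) for dim, scores in dims.items()}
        for subject, dims in val_f1.items()
    }

# Example:
# select_personalized_augmentation(
#     {"s01": {"valence": {"recombDA": 0.61, "noiseDA": 0.58}}})
# -> {"s01": {"valence": "recombDA"}}
```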
Schools: College of Computing and Data Science
Research Centres: Centre for Brain-Computing Research (CBCR)
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections: CCDS Student Reports (FYP/IA/PA/PI)
Files in This Item:
File | Description | Size | Format
---|---|---|---
isaacHongZhangJie_finalReport.pdf (Restricted Access) | | 11.09 MB | Adobe PDF
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.