Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/172792
Title: A transformer-based deep neural network model for SSVEP classification
Authors: Chen, Jianbo
Zhang, Yangsong
Pan, Yudong
Xu, Peng
Guan, Cuntai
Keywords: Engineering::Computer science and engineering
Issue Date: 2023
Source: Chen, J., Zhang, Y., Pan, Y., Xu, P. & Guan, C. (2023). A transformer-based deep neural network model for SSVEP classification. Neural Networks, 164, 521-534. https://dx.doi.org/10.1016/j.neunet.2023.04.045
Journal: Neural Networks 
Abstract: The steady-state visual evoked potential (SSVEP) is one of the most commonly used control signals in brain-computer interface (BCI) systems. However, conventional spatial filtering methods for SSVEP classification depend heavily on subject-specific calibration data, so methods that reduce this calibration demand are urgently needed. In recent years, developing methods that work in the inter-subject scenario has become a promising direction. The Transformer, a popular deep learning model, has been applied to EEG classification tasks owing to its excellent performance. In this study, we therefore propose a Transformer-based deep learning model for SSVEP classification in the inter-subject scenario, termed SSVEPformer, which is the first application of the Transformer to SSVEP classification. Inspired by previous studies, we adopt the complex spectrum features of the SSVEP data as the model input, enabling the model to explore spectral and spatial information simultaneously. Furthermore, to fully utilize the harmonic information, an extended SSVEPformer based on filter bank technology (FB-SSVEPformer) is proposed to further improve classification performance. Experiments were conducted on two open datasets (Dataset 1: 10 subjects, 12 targets; Dataset 2: 35 subjects, 40 targets). The results show that the proposed models achieve higher classification accuracy and information transfer rate than the baseline methods. The proposed models validate the feasibility of Transformer-based deep learning models for SSVEP classification and could serve as candidates for alleviating the calibration procedure in practical SSVEP-based BCI systems.
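The abstract describes feeding complex spectrum features, i.e. the real and imaginary parts of the EEG spectrum, into the Transformer model. The snippet below is a minimal sketch of that preprocessing step, assuming typical SSVEP settings (sampling rate, frequency band, FFT length) and NumPy; the function name, band limits, and padding length are illustrative choices, not taken from the paper.

```python
import numpy as np

def complex_spectrum_features(eeg, fs=256, f_min=8.0, f_max=64.0, n_fft=512):
    """Illustrative complex spectrum feature extraction for one SSVEP trial.

    eeg: array of shape (n_channels, n_samples).
    Returns an array of shape (n_channels, 2 * n_bins), where the real and
    imaginary FFT coefficients within [f_min, f_max] Hz are concatenated
    per channel (hypothetical parameters, not the paper's exact pipeline).
    """
    spec = np.fft.rfft(eeg, n=n_fft, axis=-1)        # complex spectrum per channel
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)       # frequency axis in Hz
    band = (freqs >= f_min) & (freqs <= f_max)       # keep the SSVEP-relevant band
    spec = spec[:, band]
    # Concatenating real and imaginary parts gives the network a real-valued
    # representation that retains both amplitude and phase information.
    return np.concatenate([spec.real, spec.imag], axis=-1)

# Example: a synthetic 8-channel, 1-second trial sampled at 256 Hz.
trial = np.random.randn(8, 256)
features = complex_spectrum_features(trial)
print(features.shape)
```

In a filter-bank extension such as FB-SSVEPformer, the same feature extraction would typically be applied to several band-pass filtered copies of the trial so that harmonic sub-bands are represented separately.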
URI: https://hdl.handle.net/10356/172792
ISSN: 0893-6080
DOI: 10.1016/j.neunet.2023.04.045
Schools: School of Computer Science and Engineering 
Rights: © 2023 Elsevier Ltd. All rights reserved.
Fulltext Permission: none
Fulltext Availability: No Fulltext
Appears in Collections:SCSE Journal Articles