Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/146014
Title: Parallel spatial-temporal self-attention CNN-based motor imagery classification for BCI
Authors: Liu, Xiuling
Shen, Yonglong
Liu, Jing
Yang, Jianli
Xiong, Peng
Lin, Feng
Keywords: Engineering::Computer science and engineering
Issue Date: 2020
Source: Liu, X., Shen, Y., Liu, J., Yang, J., Xiong, P., & Lin, F. (2020). Parallel Spatial–Temporal Self-Attention CNN-Based Motor Imagery Classification for BCI. Frontiers in Neuroscience, 14, 587520-. doi:10.3389/fnins.2020.587520
Journal: Frontiers in Neuroscience
Abstract: Motor imagery (MI) electroencephalography (EEG) classification is an important part of the brain-computer interface (BCI), allowing people with mobility problems to communicate with the outside world via assistive devices. However, EEG decoding is a challenging task because of its complexity, dynamic nature, and low signal-to-noise ratio. Designing an end-to-end framework that fully extracts the high-level features of EEG signals remains a challenge. In this study, we present a parallel spatial-temporal self-attention-based convolutional neural network for four-class MI EEG signal classification. This study is the first to define a new spatial-temporal representation of raw EEG signals that uses the self-attention mechanism to extract distinguishable spatial-temporal features. Specifically, we use the spatial self-attention module to capture the spatial dependencies between the channels of MI EEG signals. This module updates each channel by aggregating features over all channels with a weighted summation, thus improving the classification accuracy and eliminating the artifacts caused by manual channel selection. Furthermore, the temporal self-attention module encodes the global temporal information into features for each sampling time step, so that the high-level temporal features of the MI EEG signals can be extracted in the time domain. Quantitative analysis shows that our method outperforms state-of-the-art methods for intra-subject and inter-subject classification, demonstrating its robustness and effectiveness. In terms of qualitative analysis, we perform a visual inspection of the new spatial-temporal representation estimated from the learned architecture. Finally, the proposed method is employed to realize control of drones based on EEG signals, verifying its feasibility in real-time applications.
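The spatial self-attention mechanism described in the abstract, where each channel is updated by a weighted summation over all channels, can be illustrated with a minimal sketch. This is not the authors' implementation; the projection sizes, channel count (22, as in common four-class MI benchmarks), and segment length are illustrative assumptions, and the query/key/value matrices stand in for weights that would be learned during training.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_self_attention(x, wq, wk, wv):
    """Update each EEG channel by a weighted sum over all channels.

    x          : (C, T) raw EEG segment, C channels, T time samples
    wq, wk, wv : (T, d) projection matrices (learned in practice)
    Returns the updated channel features (C, d) and the (C, C)
    channel-to-channel attention map.
    """
    q = x @ wq                  # (C, d) one query per channel
    k = x @ wk                  # (C, d) one key per channel
    v = x @ wv                  # (C, d) one value per channel
    scores = q @ k.T / np.sqrt(q.shape[-1])   # channel affinities
    attn = softmax(scores, axis=-1)           # rows sum to 1
    return attn @ v, attn       # each channel aggregates all channels

# Toy example: 22 channels, 256 time samples, 64-dim projections.
rng = np.random.default_rng(0)
C, T, d = 22, 256, 64
x = rng.standard_normal((C, T))
wq, wk, wv = (rng.standard_normal((T, d)) * 0.05 for _ in range(3))
out, attn = spatial_self_attention(x, wq, wk, wv)
print(out.shape, attn.shape)
```

Because the attention weights are computed from the data rather than fixed by a manual channel selection, every channel contributes to every updated feature, which is the property the abstract credits with removing channel-selection artifacts. The temporal module follows the same pattern with the roles of the channel and time axes exchanged.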
URI: https://hdl.handle.net/10356/146014
ISSN: 1662-4548
DOI: 10.3389/fnins.2020.587520
Schools: School of Computer Science and Engineering 
Rights: © 2020 Liu, Shen, Liu, Yang, Xiong and Lin. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections:SCSE Journal Articles

Files in This Item:
fnins-14-587520.pdf (1.67 MB, Adobe PDF)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.