Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/170902
Title: Multi-modal graph neural network for early diagnosis of Alzheimer's disease from sMRI and PET scans
Authors: Zhang, Yanteng; He, Xiaohai; Chan, Yi Hao; Teng, Qizhi; Rajapakse, Jagath Chandana
Keywords: Engineering::Computer science and engineering
Issue Date: 2023
Source: Zhang, Y., He, X., Chan, Y. H., Teng, Q. & Rajapakse, J. C. (2023). Multi-modal graph neural network for early diagnosis of Alzheimer's disease from sMRI and PET scans. Computers in Biology and Medicine, 164, 107328. https://dx.doi.org/10.1016/j.compbiomed.2023.107328
Project: MOE-2EP20121-003
Journal: Computers in Biology and Medicine
Abstract: In recent years, deep learning models have been applied to neuroimaging data for the early diagnosis of Alzheimer's disease (AD). Structural magnetic resonance imaging (sMRI) and positron emission tomography (PET) images provide structural and functional information about the brain, respectively. Combining these features yields better predictive models for AD diagnosis than using either modality alone. However, current multi-modal deep learning approaches based on sMRI and PET are mostly limited to convolutional neural networks, which do not facilitate the integration of imaging and phenotypic information about subjects. We propose to use graph neural networks (GNNs), which are designed for problems in non-Euclidean domains. In this study, we demonstrate how brain networks are created from sMRI or PET images and used in a population graph framework that combines phenotypic information with the imaging features of the brain networks. We then present a multi-modal GNN framework in which each modality has its own GNN branch, together with a technique that combines the multi-modal data at the level of both node vectors and adjacency matrices. Finally, we perform late fusion to combine the preliminary decisions made in each branch into a final prediction. As more multi-modality data become available, multi-source, multi-modal methods are becoming the trend in AD diagnosis. We conducted exploratory experiments on multi-modal imaging data combined with non-imaging phenotypic information for AD diagnosis and analyzed the impact of the phenotypic information on diagnostic performance. The experimental results demonstrate that our proposed multi-modal approach improves performance for AD diagnosis. Our study also provides a technical reference and supports the need for multivariate, multi-modal diagnosis methods.
URI: https://hdl.handle.net/10356/170902
ISSN: 0010-4825
DOI: 10.1016/j.compbiomed.2023.107328
Schools: School of Computer Science and Engineering
Rights: © 2023 Elsevier Ltd. All rights reserved. This article may be downloaded for personal use only. Any other use requires prior permission of the copyright holder. The Version of Record is available online at http://doi.org/10.1016/j.compbiomed.2023.107328.
Fulltext Permission: open
Fulltext Availability: With Fulltext
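The fusion scheme described in the abstract (per-modality GNN branches over a population graph, data combined at both the node-vector and adjacency-matrix level, and late fusion of the branch decisions) can be sketched roughly as follows. This is a minimal illustrative sketch only: the random features, graph construction, dimensions, single-layer GCN propagation rule, and all function names are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize_adj(A):
    # Symmetric normalization D^{-1/2}(A + I)D^{-1/2}, as in a standard GCN
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_layer(A, X, W):
    # One graph-convolution step with a ReLU non-linearity
    return np.maximum(normalize_adj(A) @ X @ W, 0.0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

n, d, h, c = 8, 16, 4, 2  # subjects, features per modality, hidden width, classes

# Per-subject feature vectors extracted from each imaging modality (random stand-ins)
X_mri = rng.standard_normal((n, d))
X_pet = rng.standard_normal((n, d))

def random_population_graph():
    # Symmetric 0/1 adjacency standing in for phenotypic-similarity edges
    A = np.triu((rng.random((n, n)) > 0.5).astype(float), 1)
    return A + A.T

A_mri, A_pet = random_population_graph(), random_population_graph()

# Adjacency-level fusion: union of the two population graphs
A_fused = np.maximum(A_mri, A_pet)
# Node-level fusion: concatenate the sMRI and PET feature vectors
X_fused = np.concatenate([X_mri, X_pet], axis=1)

def branch(A, X, in_dim):
    # One GNN branch: graph convolution, then a linear read-out to class logits
    W = rng.standard_normal((in_dim, h)) * 0.1
    V = rng.standard_normal((h, c)) * 0.1
    return gcn_layer(A, X, W) @ V

logits = [branch(A_mri, X_mri, d),       # sMRI branch
          branch(A_pet, X_pet, d),       # PET branch
          branch(A_fused, X_fused, 2 * d)]  # fused branch

# Late fusion: average the preliminary branch decisions, then pick the class
probs = softmax(np.mean(logits, axis=0))
pred = probs.argmax(axis=1)  # one AD/CN prediction per subject
```

In practice each branch would be trained end-to-end and the population-graph edges weighted by actual phenotypic similarity (e.g. age, sex) rather than sampled at random; the sketch only shows how the three fusion levels compose.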
Appears in Collections: SCSE Journal Articles
Files in This Item:
File | Description | Size | Format
---|---|---|---
Multi-modal Graph Neural Network for Early Diagnosis of Alzheimer's Disease from sMRI and PET Scans.pdf | | 5.3 MB | Adobe PDF
SCOPUS™ Citations: 20 (updated on May 4, 2025)
Page view(s): 131 (updated on May 4, 2025)
Download(s): 26 (updated on May 4, 2025)
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.