Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/183812
Title: Multimodal medical data analysis using deep neural network
Authors: Choo, Darren Jian Hao
Keywords: Computer and Information Science
Medicine, Health and Life Sciences
Issue Date: 2025
Publisher: Nanyang Technological University
Source: Choo, D. J. H. (2025). Multimodal medical data analysis using deep neural network. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/183812
Project: CCDS24-0658
Abstract: The integration of multiple modalities has become a promising approach in the medical field to address the limitations of single-source data. This project explores multimodal medical data analysis using deep neural networks to improve the classification of Alzheimer’s Disease (AD), Mild Cognitive Impairment (MCI), and healthy subjects (CN). Using data acquired from the Alzheimer’s Disease Neuroimaging Initiative (ADNI), the study incorporates two distinct modalities: T1-weighted MRI and T2-FLAIR MRI. The project aims to enhance classification accuracy beyond what individual modalities can achieve. The proposed model is a pretrained 3D Convolutional Neural Network (3D-CNN), 3D MobileNetV2 1.0x, initialised with weights pretrained on the Kinetics-600 video dataset. Various fusion techniques are explored to provide a comprehensive understanding of the capabilities of multimodal data, including Early, Late, Intermediate, and the project’s own Novelty Fusion. The best classification accuracy was 0.898, achieved by Novelty Fusion (Early + Intermediate). This outperformed the single-modality scores, where T1-weighted MRI obtained 0.4926 and T2-FLAIR MRI obtained 0.5519. The significant improvement in accuracy demonstrates the effectiveness of multimodal integration, particularly for more complex fusion variations. While this model used only two MRI modalities, the framework is highly adaptable to other data types such as positron emission tomography (PET), electrocardiogram (ECG), and non-imaging modalities like clinical assessments and audio-based data. This flexibility lays the groundwork for building medical diagnostic systems that utilise a wide array of patient information. It is hoped that the insights gained from this study will contribute to future medical studies, aiding more timely, accurate and precise patient care.
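Illustrative sketch: to make the intermediate (feature-level) fusion strategy mentioned in the abstract concrete, the minimal PyTorch sketch below encodes each MRI modality with its own 3D-CNN branch, concatenates the intermediate features, and classifies into the three groups (CN / MCI / AD). The Encoder3D module, layer sizes, and input shapes are illustrative assumptions standing in for the project's 3D MobileNetV2 1.0x backbone and Kinetics-600 pretrained weights; this is not the report's actual implementation.

# Minimal sketch of intermediate fusion for two MRI modalities (T1 and T2-FLAIR).
# The toy Encoder3D below is an assumed stand-in for the 3D MobileNetV2 backbone
# used in the project; all shapes and layer sizes are illustrative.
import torch
import torch.nn as nn

class Encoder3D(nn.Module):
    """Small 3D-CNN that maps a single-modality volume to a feature vector."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # global pooling -> (B, 32, 1, 1, 1)
        )
        self.proj = nn.Linear(32, feat_dim)

    def forward(self, x):
        return self.proj(self.features(x).flatten(1))

class IntermediateFusionNet(nn.Module):
    """Encodes each modality separately, concatenates the intermediate
    features, and classifies into CN / MCI / AD (3 classes)."""
    def __init__(self, feat_dim=128, num_classes=3):
        super().__init__()
        self.t1_encoder = Encoder3D(feat_dim)
        self.flair_encoder = Encoder3D(feat_dim)
        self.classifier = nn.Sequential(
            nn.Linear(2 * feat_dim, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, t1_vol, flair_vol):
        fused = torch.cat([self.t1_encoder(t1_vol),
                           self.flair_encoder(flair_vol)], dim=1)
        return self.classifier(fused)

if __name__ == "__main__":
    # Dummy volumes: batch of 2, one channel, 32x32x32 voxels per modality.
    t1 = torch.randn(2, 1, 32, 32, 32)
    flair = torch.randn(2, 1, 32, 32, 32)
    logits = IntermediateFusionNet()(t1, flair)
    print(logits.shape)  # torch.Size([2, 3])

Early fusion would instead concatenate the two volumes along the channel dimension before a single encoder, and late fusion would average or combine per-modality predictions; the same two-branch structure adapts to either variant.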
URI: https://hdl.handle.net/10356/183812
Schools: College of Computing and Data Science 
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections: CCDS Student Reports (FYP/IA/PA/PI)

Files in This Item:
CCDS24-0658 Final Report.pdf (Restricted Access), 3.33 MB, Adobe PDF
