Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/184534
Title: Dimensionality reduction in deep neural networks
Authors: Wee, Keane Jin Yen
Keywords: Physics
Issue Date: 2025
Publisher: Nanyang Technological University
Source: Wee, K. J. Y. (2025). Dimensionality reduction in deep neural networks. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/184534
Abstract: Deep Neural Networks (DNNs) are powerful Artificial Intelligence (AI) techniques that enable machines to learn complex patterns from high-dimensional data. Due to the increasing complexity of data points and model parameters, DNNs suffer from the curse of dimensionality, where the model's efficiency degrades rapidly as the dimensionality increases. To address this, dimensionality reduction techniques are often applied to feedforward DNNs to retain essential information and improve computational performance. However, the underlying mechanisms and explainability of these techniques remain insufficiently understood. This project aims to improve the explainability of feedforward DNNs and their use of dimensionality reduction techniques. Specifically, we explore linear techniques such as Principal Component Analysis (PCA) and Singular Value Decomposition (SVD), alongside the nonlinear t-distributed Stochastic Neighbour Embedding (t-SNE), to evaluate how feature representations evolve across network layers. We further assess the separability and structure of these representations using clustering methods, including the k-means algorithm and hierarchical clustering. Experiments were conducted on the MNIST dataset, where we applied dimensionality reduction at each hidden layer and evaluated the compactness of intra-class representations and the separability of inter-class features using distance metrics, heatmaps, and visualization tools. PCA and SVD were used to estimate the number of principal components required to retain 95% of the total explained variance, while t-SNE offered deeper insights into nonlinear manifold structures. Results show a progressive refinement of the feature space through the layers, though certain digits remain ambiguously clustered.
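The variance-retention step described in the abstract can be sketched as follows. This is a minimal illustration, not the thesis's actual code: it uses synthetic data as a stand-in for layer activations and computes, via SVD, the smallest number of principal components whose cumulative explained-variance ratio reaches 95%.

```python
import numpy as np

# Hypothetical sketch of PCA-via-SVD component selection (not the author's code).
rng = np.random.default_rng(0)

# Synthetic stand-in for hidden-layer activations: 500 samples, 64 features,
# with most variance concentrated in an 8-dimensional subspace plus small noise.
X = rng.normal(size=(500, 8)) @ rng.normal(size=(8, 64)) \
    + 0.05 * rng.normal(size=(500, 64))

Xc = X - X.mean(axis=0)                  # centre the data, as PCA requires
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)          # explained-variance ratio per component
cum = np.cumsum(explained)

# Smallest k such that the first k components retain >= 95% of total variance.
k = int(np.searchsorted(cum, 0.95) + 1)
print(k)
```

Because the synthetic data has an 8-dimensional dominant subspace, `k` comes out at most 8 here; on real layer activations the curve of `cum` against component index is what reveals how compact each layer's representation is.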
URI: https://hdl.handle.net/10356/184534
Schools: School of Physical and Mathematical Sciences 
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections: SPMS Student Reports (FYP/IA/PA/PI)

Files in This Item:
File: Thesissubmission_PH4421_Keane_Wee_Jin_Yen.pdf (Restricted Access)
Size: 1.89 MB
Format: Adobe PDF

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.