Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/181174
Full metadata record
DC Field | Value | Language
dc.contributor.author | Pang, Song Chen | en_US
dc.date.accessioned | 2024-11-18T01:42:44Z | -
dc.date.available | 2024-11-18T01:42:44Z | -
dc.date.issued | 2024 | -
dc.identifier.citation | Pang, S. C. (2024). Protecting deep learning algorithms from model theft. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/181174 | en_US
dc.identifier.uri | https://hdl.handle.net/10356/181174 | -
dc.description.abstract | The rise of Deep Neural Network (DNN) architectures deployed on edge Field Programmable Gate Arrays (FPGAs) has introduced new security challenges. Attacks on such deployments can potentially reverse-engineer models, compromising their confidentiality and integrity. In this report, we present a defence mechanism aimed at protecting DNNs deployed on edge devices against adversarial attacks. Although the initial goal was to address Side-Channel Attacks (SCAs), the current implementation effectively safeguards against memory confidentiality and integrity attacks. Our work focuses on the integration of a Memory Integrity Tree (MIT) within the Versatile Tensor Accelerator (VTA) to secure memory accesses and detect unauthorized modifications during DNN execution. Key modifications were made to the VTA's runtime code, specifically the LoadBuffer2D and StoreBuffer2D functions, to enforce memory integrity checks through a Binary Merkle Tree. This structure ensures that each memory block is hashed and verified, maintaining a secure execution environment (an illustrative sketch of such a check follows the metadata record below). The implemented defences were evaluated in terms of performance overhead. The MIT effectively prevents memory attacks, such as replay attacks, by detecting tampering attempts, and it protects the DNN model hyperparameters; however, the cryptographic hash calculations it requires introduced a significant performance cost. Our findings highlight the trade-offs between security and computational efficiency, emphasising the importance of continued refinement to minimize overhead while preserving robust protection against SCAs. This project demonstrates the viability of enhancing security for FPGA-based DNN accelerators through memory integrity checks. Future research should explore optimizations to reduce performance overhead and extend protections to side-channel attacks. | en_US
dc.language.iso | en | en_US
dc.publisher | Nanyang Technological University | en_US
dc.subject | Computer and Information Science | en_US
dc.title | Protecting deep learning algorithms from model theft | en_US
dc.type | Final Year Project (FYP) | en_US
dc.contributor.supervisor | Lam Siew Kei | en_US
dc.contributor.school | College of Computing and Data Science | en_US
dc.description.degree | Bachelor's degree | en_US
dc.contributor.supervisoremail | ASSKLam@ntu.edu.sg | en_US
dc.subject.keywords | Deep neural networks | en_US
item.grantfulltext | restricted | -
item.fulltext | With Fulltext | -
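
The abstract above describes the mechanism only in prose. The following minimal C++ sketch illustrates the general idea of a binary Merkle tree over fixed-size memory blocks, with a verify-on-load and rehash-on-store discipline analogous to the checks the report adds to LoadBuffer2D and StoreBuffer2D. It is not the report's implementation: the MerkleTree class and its verify_block/update_block functions are hypothetical names, a toy FNV-1a hash stands in for the cryptographic hash, and for brevity the whole tree is held in trusted memory, whereas a real Memory Integrity Tree would typically keep only the root in on-chip trusted storage and verify fetched tree nodes as well.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <iostream>
#include <stdexcept>
#include <vector>

// Toy 64-bit FNV-1a hash, used here as a stand-in for a cryptographic hash.
static uint64_t fnv1a(const uint8_t* data, size_t len) {
    uint64_t h = 1469598103934665603ULL;
    for (size_t i = 0; i < len; ++i) { h ^= data[i]; h *= 1099511628211ULL; }
    return h;
}

// Binary Merkle tree over fixed-size blocks of a flat memory region.
// Leaves hold block hashes; each parent hashes its two children, so the
// root summarises the whole region and changes if any block is modified.
class MerkleTree {
public:
    MerkleTree(const uint8_t* mem, size_t num_blocks, size_t block_size)
        : block_size_(block_size), num_leaves_(next_pow2(num_blocks)) {
        nodes_.assign(2 * num_leaves_, 0);
        for (size_t i = 0; i < num_leaves_; ++i)
            nodes_[num_leaves_ + i] =
                i < num_blocks ? fnv1a(mem + i * block_size, block_size) : 0;
        for (size_t i = num_leaves_ - 1; i >= 1; --i)
            nodes_[i] = hash_pair(nodes_[2 * i], nodes_[2 * i + 1]);
    }

    // On load: recompute the block hash and the path to the root, raising
    // if any node disagrees with the stored tree (tampering or replay).
    void verify_block(const uint8_t* mem, size_t block) const {
        uint64_t h = fnv1a(mem + block * block_size_, block_size_);
        size_t n = num_leaves_ + block;
        if (nodes_[n] != h) throw std::runtime_error("leaf hash mismatch");
        while (n > 1) {
            size_t sib = n ^ 1;  // sibling shares the same parent
            h = (n & 1) ? hash_pair(nodes_[sib], h) : hash_pair(h, nodes_[sib]);
            n /= 2;
            if (nodes_[n] != h) throw std::runtime_error("path hash mismatch");
        }
    }

    // On store: rehash the modified block and update the path to the root.
    void update_block(const uint8_t* mem, size_t block) {
        size_t n = num_leaves_ + block;
        nodes_[n] = fnv1a(mem + block * block_size_, block_size_);
        for (n /= 2; n >= 1; n /= 2)
            nodes_[n] = hash_pair(nodes_[2 * n], nodes_[2 * n + 1]);
    }

private:
    static uint64_t hash_pair(uint64_t a, uint64_t b) {
        uint8_t buf[16];
        std::memcpy(buf, &a, 8);
        std::memcpy(buf + 8, &b, 8);
        return fnv1a(buf, sizeof(buf));
    }
    static size_t next_pow2(size_t n) {
        size_t p = 1;
        while (p < n) p *= 2;
        return p;
    }
    size_t block_size_, num_leaves_;
    std::vector<uint64_t> nodes_;  // trusted in this sketch only
};

int main() {
    constexpr size_t kBlocks = 8, kBlockSize = 64;
    std::vector<uint8_t> mem(kBlocks * kBlockSize, 0xAB);
    MerkleTree tree(mem.data(), kBlocks, kBlockSize);

    tree.verify_block(mem.data(), 3);      // clean load: passes
    mem[3 * kBlockSize] ^= 0xFF;           // simulate off-chip tampering
    try {
        tree.verify_block(mem.data(), 3);  // detected on the next load
    } catch (const std::exception& e) {
        std::cout << "tamper detected: " << e.what() << "\n";
    }
}
```

Because every store refreshes the leaf hash and the path up to the root, replaying a stale copy of a block no longer matches the stored tree, which is how replay attacks of the kind mentioned in the abstract are caught on the next load.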
Appears in Collections:CCDS Student Reports (FYP/IA/PA/PI)
Files in This Item:
File | Description | Size | Format
Pang_Song_Chen_SCSE_FYP_Final_Report.pdf | Restricted Access | 1.55 MB | Adobe PDF

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.