Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/175191
Title: Protecting neural networks from adversarial attacks
Authors: Lim, Xin Yi
Keywords: Computer and Information Science
Issue Date: 2024
Publisher: Nanyang Technological University
Source: Lim, X. Y. (2024). Protecting neural networks from adversarial attacks. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/175191
Project: SCSE23-0259 
Abstract: Deep learning has become very popular in recent years, and naturally there are rising concerns about protecting the Intellectual Property (IP) rights of deep learning models. Building and training such models, for example Convolutional Neural Networks (CNNs), requires in-depth technical expertise, computational resources, large amounts of data, and time, which motivates preventing the theft of these valuable models. Two robust frameworks exist for this purpose: watermarking and locking. Watermarking allows validation of the original ownership of a model, whereas locking aims to encrypt the model so that only authorized access produces accurate results. This report presents a workflow that applies both watermarking and locking techniques to various image classification models and shows how the two techniques can work hand in hand without compromising a model's performance.
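The report's own watermarking and locking schemes are not reproduced in this record. As a rough illustration of the two ideas named in the abstract, the following is a minimal PyTorch sketch (an assumed setup, not the author's code) of trigger-set watermark verification and key-based locking via a secret channel permutation; all names here (TinyCNN, verify_watermark, lock_model, unlock_model) are hypothetical.

import torch
import torch.nn as nn


class TinyCNN(nn.Module):
    """A minimal CNN classifier used purely for illustration."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(16)
        self.fc = nn.Linear(16, num_classes)

    def forward(self, x):
        x = torch.relu(self.bn(self.conv(x)))
        return self.fc(x.mean(dim=(2, 3)))


def verify_watermark(model, trigger_inputs, trigger_labels, threshold=0.9):
    """Ownership check: a watermarked model should classify its secret
    trigger set with high accuracy; an unrelated model should not."""
    model.eval()
    with torch.no_grad():
        preds = model(trigger_inputs).argmax(dim=1)
    return (preds == trigger_labels).float().mean().item() >= threshold


def _permute_first_layer(model, perm):
    """Reorder the first conv layer's output channels and the matching
    BatchNorm parameters/statistics according to perm."""
    with torch.no_grad():
        for t in (model.conv.weight, model.conv.bias,
                  model.bn.weight, model.bn.bias,
                  model.bn.running_mean, model.bn.running_var):
            t.copy_(t[perm])


def lock_model(model, key: int):
    """Lock: scramble the channel order with a key-derived permutation,
    so the unpermuted classifier head sees shuffled features."""
    gen = torch.Generator().manual_seed(key)
    perm = torch.randperm(model.conv.out_channels, generator=gen)
    _permute_first_layer(model, perm)


def unlock_model(model, key: int):
    """Unlock: apply the inverse of the key-derived permutation."""
    gen = torch.Generator().manual_seed(key)
    perm = torch.randperm(model.conv.out_channels, generator=gen)
    _permute_first_layer(model, torch.argsort(perm))


if __name__ == "__main__":
    model = TinyCNN()
    # Hypothetical trigger set: random "images" with owner-chosen labels.
    triggers = torch.randn(8, 3, 32, 32)
    labels = torch.randint(0, 10, (8,))
    print("Watermark present:", verify_watermark(model, triggers, labels))
    lock_model(model, key=1234)     # distribute the locked weights
    unlock_model(model, key=1234)   # authorized users restore them

The intuition behind this kind of pairing: the trigger set acts as a statistical proof of ownership, since an independently trained model is unlikely to reproduce the owner-chosen labels, while without the correct key the classifier head receives features in the wrong order and accuracy degrades.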
URI: https://hdl.handle.net/10356/175191
Schools: School of Computer Science and Engineering 
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections: SCSE Student Reports (FYP/IA/PA/PI)

Files in This Item:
File: Amended_FYP_Lim_Xin_Yi.pdf (Restricted Access), 3.42 MB, Adobe PDF

Page view(s): 110 (updated on May 7, 2025)
Download(s): 11 (updated on May 7, 2025)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.