Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/152976
Full metadata record
DC Field | Value | Language
dc.contributor.author | Chan, Alvin Guo Wei | en_US
dc.date.accessioned | 2021-10-26T01:44:04Z | -
dc.date.available | 2021-10-26T01:44:04Z | -
dc.date.issued | 2021 | -
dc.identifier.citation | Chan, A. G. W. (2021). Defences and threats in safe deep learning. Doctoral thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/152976 | en_US
dc.identifier.uri | https://hdl.handle.net/10356/152976 | -
dc.description.abstract | Deep learning systems are gaining wider adoption due to their remarkable performance in computer vision and natural language tasks. As their applications reach into high-stakes and mission-critical areas such as self-driving vehicles, the safety of these systems becomes paramount. A lapse in the safety of deep learning models could result in loss of life and erode public trust, marring the progress made by technological advances in this field. This thesis addresses current threats to the safety of deep learning models and defences to counter them. Two of the most pressing safety concerns are adversarial examples and data poisoning, where malicious actors can subvert deep learning systems by targeting a model and its training dataset respectively. In this thesis, I make several novel contributions to the fight against these threats. Firstly, I introduce a new defence paradigm against adversarial examples that can boost a model's robustness without requiring high computational resources. Secondly, I propose an approach to transfer resistance against adversarial examples from one model to other models, which may be of a different architecture or task, enhancing safety in scenarios where data or computational resources are limited. Thirdly, I present a comprehensive defence pipeline to counter data poisoning by identifying and then neutralizing the poison in a trained model. Finally, I uncover a new data poisoning vulnerability in text-based deep learning models to raise the alarm on the importance and subtlety of such threats. | en_US
dc.language.iso | en | en_US
dc.publisher | Nanyang Technological University | en_US
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). | en_US
dc.subject | Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence | en_US
dc.title | Defences and threats in safe deep learning | en_US
dc.type | Thesis-Doctor of Philosophy | en_US
dc.contributor.supervisor | Ong Yew Soon | en_US
dc.contributor.school | School of Computer Science and Engineering | en_US
dc.description.degree | Doctor of Philosophy | en_US
dc.identifier.doi | 10.32657/10356/152976 | -
dc.contributor.supervisoremail | ASYSOng@ntu.edu.sg | en_US
item.fulltext | With Fulltext | -
item.grantfulltext | open | -
Appears in Collections:SCSE Theses
Files in This Item:
File | Description | Size | Format
Thesis_Final_25Oct21.pdf | Thesis | 9.22 MB | Adobe PDF
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.