Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/152976
Title: Defences and threats in safe deep learning
Authors: Chan, Alvin Guo Wei
Keywords: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Issue Date: 2021
Publisher: Nanyang Technological University
Source: Chan, A. G. W. (2021). Defences and threats in safe deep learning. Doctoral thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/152976
Abstract: Deep learning systems are gaining wider adoption due to their remarkable performance in computer vision and natural language tasks. As their applications reach into high-stakes and mission-critical areas such as self-driving vehicles, the safety of these systems becomes paramount. A lapse in safety in deep learning models could result in loss of life and erode society's trust, marring the progress made by technological advances in this field. This thesis addresses current threats to the safety of deep learning models and defences to counter these threats. Two of the most pressing safety concerns are adversarial examples and data poisoning, where malicious actors can subjugate deep learning systems by targeting a model and its training dataset respectively. In this thesis, I make several novel contributions in the fight against these threats. Firstly, I introduce a new defence paradigm against adversarial examples that can boost a model's robustness without the need for high computational resources. Secondly, I propose an approach to transfer resistance against adversarial examples from one model to other models, which may be of a different architecture or task, enhancing safety in scenarios where data or computational resources are limited. Thirdly, I present a comprehensive defence pipeline to counter data poisoning by identifying and then neutralizing the poison in a trained model. Finally, I uncover a new data poisoning vulnerability in text-based deep learning models to raise the alarm on the importance and subtlety of such threats.
URI: https://hdl.handle.net/10356/152976
DOI: 10.32657/10356/152976
Rights: This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections: SCSE Theses
Updated on May 19, 2022
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.