Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/149008
Full metadata record
dc.contributor.author: Chan, Jarod Yan Cheng (en_US)
dc.date.accessioned: 2021-05-24T12:24:16Z
dc.date.available: 2021-05-24T12:24:16Z
dc.date.issued: 2021
dc.identifier.citation: Chan, J. Y. C. (2021). Defence on unrestricted adversarial examples. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/149008 (en_US)
dc.identifier.uri: https://hdl.handle.net/10356/149008
dc.description.abstract: Deep neural networks for image classification have gained popularity in recent years and, as a result, have also become the target of attacks. Adversarial samples are inputs crafted to fool neural networks into misclassification. They come in two forms: the first is created by adding specific perturbations to the pixels of an image; the second is produced through generative models or transformations, and is called an unrestricted adversarial sample. Unrestricted adversarial samples are the focus of this paper. Conventional defences that make use of a neural network's gradients are less effective against them. This paper proposes using Generative Adversarial Networks (GANs), pairs of neural networks in which a generator learns to produce images while a discriminator learns to tell real images from generated ones. Transfer learning from parts of the GAN is used to train a general network to distinguish between images created by generative models and real images. Neural networks can then be protected from unrestricted adversarial attacks by detecting adversarial images and preventing them from being input to the network. Experiments from the project show that, when trained on a dataset of real and adversarial images, the model can differentiate between these two classes. Testing on images outside the dataset's distribution, however, yields worse results. (en_US)
dc.language.iso: en (en_US)
dc.publisher: Nanyang Technological University (en_US)
dc.relation: SCSE20-0292 (en_US)
dc.subject: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence (en_US)
dc.title: Defence on unrestricted adversarial examples (en_US)
dc.type: Final Year Project (FYP) (en_US)
dc.contributor.supervisor: Jun Zhao (en_US)
dc.contributor.school: School of Computer Science and Engineering (en_US)
dc.description.degree: Bachelor of Engineering (Computer Science) (en_US)
dc.contributor.supervisoremail: junzhao@ntu.edu.sg (en_US)
item.grantfulltext: restricted
item.fulltext: With Fulltext
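The detection pipeline the abstract describes (freeze features learned while telling real images from generated ones, then train a small head on those features to flag generative-model output) can be illustrated with a toy sketch. Everything below is a hypothetical stand-in, not the project's actual model: a fixed random linear map plays the role of the frozen GAN-discriminator backbone, synthetic Gaussian vectors play the role of real and generated images, and a logistic-regression head is the trained detector.

```python
import math
import random

random.seed(0)
D_IN, D_FEAT = 8, 4

# Frozen "feature extractor": a fixed random linear map standing in for the
# pretrained discriminator's early layers. It is never updated during training.
W_frozen = [[random.gauss(0, 1) for _ in range(D_IN)] for _ in range(D_FEAT)]

def features(x):
    # Frozen layer followed by a ReLU, as a frozen backbone would apply.
    return [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in W_frozen]

def sample(label, n):
    # Synthetic stand-in data: "real" inputs cluster around +2, "generated"
    # ones around -2, with unit Gaussian noise per dimension.
    mu = 2.0 if label == 1 else -2.0
    return [([random.gauss(mu, 1.0) for _ in range(D_IN)], label) for _ in range(n)]

def sigmoid(z):
    z = max(-60.0, min(60.0, z))  # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-z))

def train_head(data, epochs=50, lr=0.1):
    # Only this small logistic head is trained; the backbone stays frozen.
    w, b = [0.0] * D_FEAT, 0.0
    for _ in range(epochs):
        for x, y in data:
            f = features(x)
            p = sigmoid(sum(wi * fi for wi, fi in zip(w, f)) + b)
            g = p - y  # gradient of the log loss w.r.t. the pre-activation
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
            b -= lr * g
    return w, b

train_set = sample(1, 100) + sample(0, 100)
test_set = sample(1, 50) + sample(0, 50)
w, b = train_head(train_set)

def predict(x):
    # 1 = flagged as real, 0 = flagged as generated (adversarial).
    return int(sigmoid(sum(wi * fi for wi, fi in zip(w, features(x))) + b) >= 0.5)

acc = sum(predict(x) == y for x, y in test_set) / len(test_set)
print(f"held-out accuracy: {acc:.2f}")
```

The detector then acts as a gatekeeper: only inputs it classifies as real are passed to the protected classifier. The abstract's caveat applies equally here: the head separates the two clusters it was trained on, but inputs drawn from a different distribution would degrade its accuracy.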
Appears in Collections: SCSE Student Reports (FYP/IA/PA/PI)
Files in This Item:
FYP Final.pdf (Restricted Access), 1.58 MB, Adobe PDF

Page view(s): 170 (updated on Jun 26, 2022)
Download(s): 8 (updated on Jun 26, 2022)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.