Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/148573
Full metadata record
DC Field | Value | Language
dc.contributor.author | Luo, Jinqi | en_US
dc.date.accessioned | 2021-05-06T06:40:14Z | -
dc.date.available | 2021-05-06T06:40:14Z | -
dc.date.issued | 2021 | -
dc.identifier.citation | Luo, J. (2021). Generating adversarial examples with only one image. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/148573 | en_US
dc.identifier.uri | https://hdl.handle.net/10356/148573 | -
dc.description.abstract | Deep-learning-based vision systems are widely deployed in today's world. The backbones of these systems, deep neural networks (DNNs), show impressive capabilities in feature extraction, large-scale training, and precise prediction. However, DNNs have been shown to be vulnerable to adversarial examples of various types, including adversarial perturbations and adversarial patches. Existing approaches for adversarial patch generation hardly consider the contextual consistency between patches and the image background, causing such patches to be easily detected and the attacks to fail. Additionally, these methods require large amounts of training data, which is computationally expensive and time-consuming. In this project, we explore how to generate advanced adversarial patches effectively and efficiently. To overcome the aforementioned challenges, we propose an approach that generates adversarial yet inconspicuous patches from a single image. In our approach, adversarial patches are produced in a coarse-to-fine manner using generators and discriminators at multiple scales. To equip our approach with strong attack capability, patch locations are selected based on the perceptual sensitivity of the victim model (a minimal illustrative sketch of this step follows the metadata record below). Contextual information is encoded during min-max training so that the patches remain consistent with their surroundings. Extensive experiments show that our approach has strong attack capability in both white-box and black-box settings. Experiments on saliency detection and a user evaluation indicate that our adversarial patches evade human observation and are more inconspicuous and natural-looking than those produced by existing approaches. Lastly, experiments on real-world objects show that our digital approach has the potential to be malicious in real-world settings. | en_US
dc.language.iso | en | en_US
dc.publisher | Nanyang Technological University | en_US
dc.relation | SCSE20-0291 | en_US
dc.subject | Engineering::Computer science and engineering | en_US
dc.title | Generating adversarial examples with only one image | en_US
dc.type | Final Year Project (FYP) | en_US
dc.contributor.supervisor | Jun Zhao | en_US
dc.contributor.school | School of Computer Science and Engineering | en_US
dc.description.degree | Bachelor of Engineering (Computer Science) | en_US
dc.contributor.research | Computational Intelligence Lab | en_US
dc.contributor.supervisoremail | junzhao@ntu.edu.sg | en_US
item.grantfulltext | restricted | -
item.fulltext | With Fulltext | -
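The abstract states that patch locations are chosen according to the perceptual sensitivity of the victim model. As a rough illustration only, and not the report's actual code, the following minimal PyTorch sketch computes an input-gradient sensitivity map for a stand-in victim model and slides a window over it to find a candidate patch location. The resnet18 victim, the random input image, and the 32-pixel patch size are placeholder assumptions, not details taken from the report.

```python
# Hypothetical sketch: pick a patch location from the victim model's
# input-gradient sensitivity. The victim model, input image, and patch
# size are stand-ins, not the report's actual setup.
import torch
import torch.nn.functional as F
import torchvision

def sensitivity_map(model, image, label):
    """Absolute gradient of the target-class score w.r.t. the input, summed over channels."""
    image = image.clone().requires_grad_(True)
    score = model(image.unsqueeze(0))[0, label]
    score.backward()
    return image.grad.abs().sum(dim=0)  # (H, W) sensitivity heatmap

def select_patch_location(sens, patch_size):
    """Slide a patch_size x patch_size window and return the most sensitive top-left corner."""
    window = torch.ones(1, 1, patch_size, patch_size)
    scores = F.conv2d(sens[None, None], window)  # total sensitivity per window position
    idx = scores.flatten().argmax()
    h = idx // scores.shape[-1]
    w = idx % scores.shape[-1]
    return int(h), int(w)

if __name__ == "__main__":
    model = torchvision.models.resnet18(weights=None).eval()  # stand-in victim model
    image = torch.rand(3, 224, 224)                           # stand-in input image
    label = model(image.unsqueeze(0)).argmax().item()         # current prediction
    sens = sensitivity_map(model, image, label)
    print(select_patch_location(sens, patch_size=32))
```

The sliding-window aggregation is done with a convolution against an all-ones kernel, which scores every candidate location in one pass; the actual report may weight or select locations differently.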
Appears in Collections: SCSE Student Reports (FYP/IA/PA/PI)
Files in This Item:
File | Description | Size | Format
FYP Report.pdf | Restricted Access | 3.47 MB | Adobe PDF

Page view(s): 142 (updated on May 17, 2022)
Download(s): 17 (updated on May 17, 2022)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.