Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/137128
Full metadata record
DC Field: Value (Language)
dc.contributor.author: Wang, Si (en_US)
dc.contributor.author: Liu, Wenye (en_US)
dc.contributor.author: Chang, Chip-Hong (en_US)
dc.date.accessioned: 2020-03-02T01:24:12Z
dc.date.available: 2020-03-02T01:24:12Z
dc.date.issued: 2019
dc.identifier.citation: Wang, S., Liu, W., & Chang, C.-H. (2019). Detecting adversarial examples for deep neural networks via layer directed discriminative noise injection. 2019 Asian Hardware Oriented Security and Trust Symposium (AsianHOST). doi:10.1109/AsianHOST47458.2019.9006702 (en_US)
dc.identifier.uri: https://hdl.handle.net/10356/137128
dc.description.abstract: Deep learning is a popular and powerful machine learning solution to computer vision tasks. Its most criticized vulnerability is its poor tolerance of adversarial images, which are obtained by deliberately adding imperceptibly small perturbations to clean inputs. Such adversarial examples can delude a classifier into making wrong decisions. Previous defensive techniques have mostly focused on refining the model or transforming the input; they have either been demonstrated only on small datasets or shown limited success. Furthermore, they are rarely scrutinized from the hardware perspective, even though Artificial Intelligence (AI) on a chip is the roadmap for embedded intelligence everywhere. In this paper, we propose a new discriminative noise injection strategy that adaptively selects a few dominant layers and progressively discriminates adversarial from benign inputs. This is made possible by injecting different amounts of noise into the weights of individual layers in the model and evaluating the differences in label change rate between adversarial and natural images. The approach is evaluated on the ImageNet dataset with 8-bit truncated models of state-of-the-art DNN architectures. The results show a high detection rate of up to 88.00% with a false positive rate of only approximately 5% for MobileNet. Both detection rate and false positive rate improve well upon existing advanced defenses against the most practical non-invasive universal perturbation attack on deep-learning-based AI chips. (en_US) [See the illustrative sketch after this metadata record.]
dc.description.sponsorship: MOE (Min. of Education, S’pore) (en_US)
dc.language.iso: en (en_US)
dc.relation.uri: https://doi.org/10.21979/N9/WCIL7X (en_US)
dc.rights: © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: https://doi.org/10.1109/AsianHOST47458.2019.9006702 (en_US)
dc.subject: Engineering::Electrical and electronic engineering::Integrated circuits (en_US)
dc.title: Detecting adversarial examples for deep neural networks via layer directed discriminative noise injection (en_US)
dc.type: Conference Paper (en)
dc.contributor.school: School of Electrical and Electronic Engineering (en_US)
dc.contributor.conference: 2019 IEEE Asian Hardware Oriented Security and Trust Symposium (en_US)
dc.identifier.doi: 10.1109/AsianHOST47458.2019.9006702
dc.description.version: Accepted version (en_US)
dc.subject.keywords: Machine Learning Security (en_US)
dc.subject.keywords: Adversarial Attack (en_US)
dc.citation.conferencelocation: Xi'an, China (en_US)
item.grantfulltext: open
item.fulltext: With Fulltext
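
Illustrative sketch of the detection idea: the abstract above describes injecting different amounts of noise into the weights of a few dominant layers and using the difference in label change rate to separate adversarial from natural inputs. The following is a minimal, hypothetical PyTorch sketch of such a label-change-rate test, written only to illustrate that idea; the layer selection, noise scale, trial count, and decision threshold are placeholder assumptions, not code or values from the paper.

```python
# Hypothetical sketch only, based on the abstract above; NOT the authors' implementation.
import copy
import torch

def label_change_rate(model, x, noisy_layers, sigma=0.05, trials=20):
    """Fraction of noise-injection trials in which the predicted label changes.

    model        : a trained torch.nn.Module classifier
    x            : a single input batch of shape (1, C, H, W)
    noisy_layers : names of the "dominant" layers whose weights receive noise
                   (assumed to have been selected beforehand, as the paper does adaptively)
    sigma        : relative standard deviation of the injected Gaussian noise (placeholder)
    trials       : number of independent noise-injection trials (placeholder)
    """
    model.eval()
    with torch.no_grad():
        clean_label = model(x).argmax(dim=1)
        changes = 0
        for _ in range(trials):
            noisy = copy.deepcopy(model)  # perturb a copy so the original weights stay intact
            for name, param in noisy.named_parameters():
                if any(name.startswith(layer) for layer in noisy_layers) and "weight" in name:
                    param.add_(sigma * param.abs().mean() * torch.randn_like(param))
            if noisy(x).argmax(dim=1) != clean_label:
                changes += 1
    return changes / trials

def is_adversarial(model, x, noisy_layers, threshold=0.3):
    # Inputs whose label flips unusually often under weight noise are flagged.
    # The 0.3 threshold is an arbitrary illustration, not a value from the paper.
    return label_change_rate(model, x, noisy_layers) > threshold
```

In use, `noisy_layers` would be chosen per network (the paper selects the dominant layers adaptively) and `threshold` calibrated on known clean images so that the false positive rate stays low.
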
Appears in Collections: EEE Conference Papers
Files in This Item:
File: FinalVersion_LayerDirected_AsianHOST2019.pdf | Size: 424.19 kB | Format: Adobe PDF

SCOPUS citations: 4 (updated on Mar 8, 2023)
Page view(s): 227 (updated on Mar 20, 2023)
Download(s): 106 (updated on Mar 20, 2023)


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.