Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/144346
Full metadata record
DC Field | Value | Language
dc.contributor.author | Wang, Si | en_US
dc.contributor.author | Liu, Wenye | en_US
dc.contributor.author | Chang, Chip-Hong | en_US
dc.date.accessioned | 2020-10-29T06:33:02Z | -
dc.date.available | 2020-10-29T06:33:02Z | -
dc.date.issued | 2020 | -
dc.identifier.citation | Wang, S., Liu, W., & Chang, C.-H. (2020). Fired neuron rate based decision tree for detection of adversarial examples in DNNs. Proceedings of the 2020 IEEE International Symposium on Circuits and Systems (ISCAS). doi:10.1109/ISCAS45731.2020.9180476 | en_US
dc.identifier.uri | https://hdl.handle.net/10356/144346 | -
dc.description.abstract | The deep neural network (DNN) is a prevalent machine learning solution to computer vision problems. The most criticized vulnerability of deep learning is its susceptibility to adversarial images, crafted by maliciously adding infinitesimal distortions to benign inputs. Such adversarial examples can fool a classifier. Existing countermeasures against these adversarial attacks are mainly developed on the software model of DNNs, by modifying the training during learning or the input during testing, modifying the network or its loss/activation functions, or relying on add-on models to classify unseen examples. These approaches do not consider the optimization of the learning models for hardware implementation. In this paper, a new thresholding method is proposed based on comparators integrated into the most discriminative layers of the DNN, which are identified by the difference in their layer-wise fired neuron rates between adversarial and normal inputs. The effectiveness of the method is validated on the ImageNet dataset with 8-bit truncated models of state-of-the-art DNN architectures. A detection rate of up to 98% is achieved with a false positive rate of only 4.5%. The results show a significant improvement in both detection rate and false positive rate over previous countermeasures against the most practical non-invasive universal perturbation attack on deep learning based AI chips. | en_US
dc.description.sponsorship | National Research Foundation (NRF) | en_US
dc.language.iso | en | en_US
dc.relation | CHFA-GC1-AW01 | en_US
dc.relation.uri | https://doi.org/10.21979/N9/YPY0EB | en_US
dc.rights | © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: https://doi.org/10.1109/ISCAS45731.2020.9180476 | en_US
dc.subject | Engineering::Electrical and electronic engineering::Computer hardware, software and systems | en_US
dc.title | Fired neuron rate based decision tree for detection of adversarial examples in DNNs | en_US
dc.type | Conference Paper | en
dc.contributor.school | School of Electrical and Electronic Engineering | en_US
dc.contributor.conference | 2020 IEEE International Symposium on Circuits and Systems (ISCAS) | en_US
dc.contributor.research | VIRTUS, IC Design Centre of Excellence | en_US
dc.identifier.doi | 10.1109/ISCAS45731.2020.9180476 | -
dc.description.version | Accepted version | en_US
dc.subject.keywords | Deep Learning Security | en_US
dc.subject.keywords | Adversarial Attack | en_US
dc.citation.conferencelocation | Seville, Spain | en_US
dc.description.acknowledgement | This research is supported by the National Research Foundation, Singapore, under its National Cybersecurity Research & Development Programme / Cyber-Hardware Forensic & Assurance Evaluation R&D Programme (Award: CHFA-GC1-AW01). | en_US
item.grantfulltext | open | -
item.fulltext | With Fulltext | -
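
The abstract above describes the detection idea: the fraction of neurons that fire (produce a positive activation) in selected layers differs between benign and adversarial inputs, and those per-layer rates are thresholded by a decision tree. Below is a minimal software-level sketch of that idea, not the authors' hardware comparator design; it assumes a PyTorch model, the monitored layers and tree depth are hypothetical placeholders, and scikit-learn's DecisionTreeClassifier stands in for the paper's decision tree.

# Illustrative sketch only: not the authors' released code or hardware design.
# Assumes a PyTorch model; the layer selection and the scikit-learn
# DecisionTreeClassifier below are hypothetical stand-ins.
import torch
import torch.nn as nn
from sklearn.tree import DecisionTreeClassifier

def fired_neuron_rates(model: nn.Module, x: torch.Tensor, layers):
    """Return, for each monitored layer, the fraction of its output activations
    that fire (are strictly positive) on the input batch x."""
    rates, handles = [], []

    def hook(_module, _inputs, output):
        # Fired neuron rate = share of activations above zero (post-ReLU).
        rates.append((output > 0).float().mean().item())

    for layer in layers:
        handles.append(layer.register_forward_hook(hook))
    with torch.no_grad():
        model(x)
    for h in handles:
        h.remove()
    return rates

# Hypothetical usage: monitor two late stages of an ImageNet classifier and
# train a small decision tree on the per-layer fired-neuron-rate features.
# net = torchvision.models.resnet50(weights="IMAGENET1K_V1").eval()
# monitored = [net.layer3, net.layer4]   # assumed "most discriminative" layers
# feats = [fired_neuron_rates(net, img.unsqueeze(0), monitored) for img in images]
# detector = DecisionTreeClassifier(max_depth=3).fit(feats, labels)  # labels: 1 = adversarial, 0 = benign
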
Appears in Collections: EEE Conference Papers
Files in This Item:
File | Description | Size | Format
PID6471825.pdf | Fired Neuron Rate Based Decision Tree for Detection of Adversarial Examples in DNNs | 447.52 kB | Adobe PDF
