Title: Imperceptible misclassification attack on deep learning accelerator by glitch injection
Authors: Liu, Wenye
Keywords: Engineering::Electrical and electronic engineering
Issue Date: 2020
Source: Liu, W., Chang, C.-H., Zhang, F., & Lou, X. (2020). Imperceptible misclassification attack on deep learning accelerator by glitch injection. Proceedings of the 2020 57th ACM/IEEE Design Automation Conference (DAC). doi:10.1109/DAC18072.2020.9218577
Project: MOE2018-T1-001-131 (RG87/18)
Abstract: The convergence of edge computing and deep learning empowers endpoint hardware and edge devices to perform inference locally with the help of deep neural network (DNN) accelerators. This trend of edge intelligence invites new attack vectors that are methodologically different from well-known software-oriented deep learning attacks such as adversarial examples. Current studies of threats on DNN hardware focus mainly on model parameter manipulation. Such manipulation is not stealthy, as it leaves non-erasable traces or creates conspicuous output patterns. In this paper, we present and investigate an imperceptible misclassification attack on DNN hardware that introduces infrequent instantaneous glitches into the clock signal. Compared with falsifying model parameters through permanent faults, intermittently corrupting targeted intermediate results of convolution layer(s) by disrupting the associated computations leaves no trace. We demonstrate our attack on nine state-of-the-art ImageNet models running on a Xilinx FPGA-based deep learning accelerator. With no knowledge of the models, our attack achieves over 98% misclassification on 8 of the 9 models with glitches launched into only 10% of the computation clock cycles. Given the model details and inputs, all test images applied to ResNet50 can be successfully misclassified with no more than 1.7% glitch injection.
URI: https://hdl.handle.net/10356/145856
ISBN: 978-1-7281-1085-1
DOI: 10.1109/DAC18072.2020.9218577
Rights: © 2020 Association for Computing Machinery (ACM). All rights reserved. This paper was published in the 2020 57th ACM/IEEE Design Automation Conference (DAC) and is made available with permission of the Association for Computing Machinery (ACM).
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections: EEE Conference Papers
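The abstract's fault model (a clock glitch intermittently corrupting individual convolution computations, leaving the model weights untouched) can be illustrated in software. The sketch below is a minimal, hypothetical simulation, not the paper's actual FPGA setup: it models a glitched cycle as a timing violation in which only a partial accumulation reaches the output register, applied to a random ~10% of output computations, mirroring the 10% glitch budget quoted above. The functions `conv2d` and `conv2d_glitched` and the specific truncation behavior are illustrative assumptions.

```python
import numpy as np

def conv2d(x, w):
    """Plain valid 2-D convolution (single channel): the victim computation."""
    H, W = x.shape
    k, _ = w.shape
    out = np.zeros((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + k, j:j + k] * w)
    return out

def conv2d_glitched(x, w, glitch_rate=0.10, rng=None):
    """Same convolution, but each output computation suffers a clock glitch
    with probability `glitch_rate`. A glitch is modeled (hypothetically) as a
    timing fault that lets only the first row of the receptive field's
    partial sum reach the register; no weight or activation is modified,
    so the fault leaves no persistent trace."""
    rng = np.random.default_rng() if rng is None else rng
    H, W = x.shape
    k, _ = w.shape
    out = np.zeros((H - k + 1, W - k + 1))
    n_glitched = 0
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            if rng.random() < glitch_rate:
                # Glitched cycle: truncated (partial) accumulation is latched.
                out[i, j] = np.sum(x[i, j:j + k] * w[0])
                n_glitched += 1
            else:
                out[i, j] = np.sum(x[i:i + k, j:j + k] * w)
    return out, n_glitched
```

Running both versions on the same input shows the attack's character: the vast majority of output values are bit-exact with the clean result, while a sparse subset is silently wrong; downstream layers then amplify these few corrupted activations into a misclassification.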
Updated on Apr 18, 2021