Please use this identifier to cite or link to this item:
Full metadata record
DC FieldValueLanguage
dc.contributor.authorYan, Xiaobeien_US
dc.contributor.authorLou, Xiaoxuanen_US
dc.contributor.authorXu, Guowenen_US
dc.contributor.authorQiu, Hanen_US
dc.contributor.authorGuo, Shangweien_US
dc.contributor.authorChang, Chip Hongen_US
dc.contributor.authorZhang, Tianweien_US
dc.identifier.citationYan, X., Lou, X., Xu, G., Qiu, H., Guo, S., Chang, C. H. & Zhang, T. (2023). Mercury: an automated remote side-channel attack to Nvidia deep learning accelerator. 2023 International Conference on Field-Programmable Technology (ICFPT), 188-197.
dc.description.abstractDNN accelerators have been widely deployed in many scenarios to speed up inference and reduce energy consumption. One major concern about their use is the confidentiality of the deployed models: model inference on the accelerators can leak side-channel information that enables an adversary to precisely recover the model details. Such model extraction attacks not only compromise the intellectual property of DNN models but also facilitate further adversarial attacks. Although previous works have demonstrated a number of side-channel techniques for extracting models from DNN accelerators, they are not practical for two reasons: (1) they only target simplified accelerator implementations, which have limited practicality in the real world, and (2) they require heavy human analysis and domain knowledge. To overcome these limitations, this paper presents Mercury, the first automated remote side-channel attack against an off-the-shelf Nvidia DNN accelerator. The key insight of Mercury is to model the side-channel extraction process as a sequence-to-sequence problem. The adversary leverages a time-to-digital converter (TDC) to remotely collect the power trace of the target model's inference, then uses a learning model to automatically recover the architectural details of the victim model from the power trace without any prior knowledge. The adversary can further use the attention mechanism to localize the leakage points that contribute most to the attack. Evaluation results indicate that Mercury keeps the error rate of model extraction below 1%.en_US
dc.description.sponsorshipCyber Security Agencyen_US
dc.description.sponsorshipNational Research Foundation (NRF)en_US
dc.rights© 2023 IEEE. All rights reserved. This article may be downloaded for personal use only. Any other use requires prior permission of the copyright holder. The Version of Record is available online at
dc.subjectComputer and Information Scienceen_US
dc.titleMercury: an automated remote side-channel attack to Nvidia deep learning acceleratoren_US
dc.typeConference Paperen
dc.contributor.schoolSchool of Computer Science and Engineeringen_US
dc.contributor.conference2023 International Conference on Field-Programmable Technology (ICFPT)en_US
dc.description.versionSubmitted/Accepted versionen_US
dc.subject.keywordsProfiled Side-Channel Attacksen_US
dc.subject.keywordsDNN Acceleratoren_US
dc.subject.keywordsSequence-to-Sequence Learningen_US
dc.subject.keywordsModel Extractionen_US
dc.citation.conferencelocationYokohama, Japanen_US
dc.description.acknowledgementThis research is supported by National Research Foundation, Singapore, and Cyber Security Agency of Singapore under its National Cybersecurity Research & Development Programme (Cyber-Hardware Forensic & Assurance Evaluation R&D Programme <NRF2018NCRNCR009-0001>), and MoE Tier 1 RS02/19.en_US
item.fulltextWith Fulltext-
Appears in Collections:SCSE Conference Papers
Files in This Item:
File: _DR_NTU_An_Automated_Remote_Side_channel_Attack_to_FPGA_based_DNN_Accelerators.pdf (2.97 MB, Adobe PDF)


Updated on Jul 18, 2024


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.