Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/171839
Full metadata record
DC Field | Value | Language
dc.contributor.author | Yan, Xiaobei | en_US
dc.contributor.author | Lou, Xiaoxuan | en_US
dc.contributor.author | Xu, Guowen | en_US
dc.contributor.author | Qiu, Han | en_US
dc.contributor.author | Guo, Shangwei | en_US
dc.contributor.author | Chang, Chip Hong | en_US
dc.contributor.author | Zhang, Tianwei | en_US
dc.date.accessioned | 2023-12-28T07:54:23Z | -
dc.date.available | 2023-12-28T07:54:23Z | -
dc.date.issued | 2023 | -
dc.identifier.citation | Yan, X., Lou, X., Xu, G., Qiu, H., Guo, S., Chang, C. H. & Zhang, T. (2023). Mercury: an automated remote side-channel attack to Nvidia deep learning accelerator. 2023 International Conference on Field-Programmable Technology (ICFPT), 188-197. https://dx.doi.org/10.1109/ICFPT59805.2023.00026 | en_US
dc.identifier.isbn | 979-8-3503-5911-4 | -
dc.identifier.issn | 2837-0449 | -
dc.identifier.uri | https://hdl.handle.net/10356/171839 | -
dc.description.abstract | DNN accelerators have been widely deployed in many scenarios to speed up the inference process and reduce energy consumption. One major concern about the use of these accelerators is the confidentiality of the deployed models: model inference execution on the accelerators could leak side-channel information, which enables an adversary to precisely recover the model details. Such model extraction attacks can not only compromise the intellectual property of DNN models, but also facilitate some adversarial attacks. Although previous works have demonstrated a number of side-channel techniques to extract models from DNN accelerators, they are not practical for two reasons. (1) They only target simplified accelerator implementations, which have limited practicality in the real world. (2) They require heavy human analysis and domain knowledge. To overcome these limitations, this paper presents Mercury, the first automated remote side-channel attack against the off-the-shelf Nvidia DNN accelerator. The key insight of Mercury is to model the side-channel extraction process as a sequence-to-sequence problem. The adversary leverages a time-to-digital converter (TDC) to remotely collect the power trace of the target model's inference, and then uses a learning model to automatically recover the architecture details of the victim model from the power trace without any prior knowledge. The adversary can further use the attention mechanism to localize the leakage points that contribute most to the attack. Evaluation results indicate that Mercury can keep the error rate of model extraction below 1%. | en_US
dc.description.sponsorship | Cyber Security Agency | en_US
dc.description.sponsorship | National Research Foundation (NRF) | en_US
dc.language.iso | en | en_US
dc.relation | NRF2018NCRNCR009-0001 | en_US
dc.relation | RS02/19 | en_US
dc.rights | © 2023 IEEE. All rights reserved. This article may be downloaded for personal use only. Any other use requires prior permission of the copyright holder. The Version of Record is available online at http://doi.org/10.1109/ICFPT59805.2023.00026. | en_US
dc.subject | Computer and Information Science | en_US
dc.title | Mercury: an automated remote side-channel attack to Nvidia deep learning accelerator | en_US
dc.type | Conference Paper | en
dc.contributor.school | School of Computer Science and Engineering | en_US
dc.contributor.conference | 2023 International Conference on Field-Programmable Technology (ICFPT) | en_US
dc.identifier.doi | 10.1109/ICFPT59805.2023.00026 | -
dc.description.version | Submitted/Accepted version | en_US
dc.identifier.url | https://fpt2023.org/index.html | -
dc.identifier.spage | 188 | en_US
dc.identifier.epage | 197 | en_US
dc.subject.keywords | Profiled Side-Channel Attacks | en_US
dc.subject.keywords | DNN Accelerator | en_US
dc.subject.keywords | Sequence-to-Sequence Learning | en_US
dc.subject.keywords | FPGA | en_US
dc.subject.keywords | Model Extraction | en_US
dc.citation.conferencelocation | Yokohama, Japan | en_US
dc.description.acknowledgement | This research is supported by National Research Foundation, Singapore, and Cyber Security Agency of Singapore under its National Cybersecurity Research & Development Programme (Cyber-Hardware Forensic & Assurance Evaluation R&D Programme <NRF2018NCRNCR009-0001>), and MoE Tier 1 RS02/19. | en_US
item.fulltext | With Fulltext | -
item.grantfulltext | open | -
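
The abstract frames side-channel model extraction as a sequence-to-sequence task: a TDC-sampled power trace is the input sequence, the victim model's layer-by-layer architecture is the output sequence, and attention weights point to the trace segments that leak the most. The sketch below only illustrates that idea; it is not the paper's implementation, and every name, shape, vocabulary size, and hyperparameter in it is an assumption chosen for illustration.

```python
import torch
import torch.nn as nn

class TraceEncoder(nn.Module):
    """Encodes a 1-D power trace (TDC samples) into per-timestep hidden states."""
    def __init__(self, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden,
                          batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, hidden)

    def forward(self, trace):                 # trace: (batch, samples, 1)
        outs, _ = self.rnn(trace)             # (batch, samples, 2*hidden)
        return self.proj(outs)                # (batch, samples, hidden)

class LayerDecoder(nn.Module):
    """Predicts architecture tokens while attending over trace positions."""
    def __init__(self, vocab, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.rnn = nn.GRU(2 * hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, tokens, enc_states):    # tokens: (batch, length)
        emb = self.embed(tokens)
        # Attention weights indicate which trace segments drive each predicted
        # token, i.e. the leakage points the abstract says attention can localize.
        ctx, attn_w = self.attn(emb, enc_states, enc_states)
        dec, _ = self.rnn(torch.cat([emb, ctx], dim=-1))
        return self.out(dec), attn_w

# Dummy usage: 8 traces of 2048 TDC samples each, a hypothetical 20-token
# vocabulary of layer types / hyperparameter bins, 12 tokens per model.
encoder, decoder = TraceEncoder(), LayerDecoder(vocab=20)
trace = torch.randn(8, 2048, 1)
tokens = torch.randint(0, 20, (8, 12))
logits, attn_w = decoder(tokens, encoder(trace))   # logits: (8, 12, 20)
```

In practice such a model would be trained on profiled traces of known architectures with a token-level cross-entropy loss; the sketch omits training and beam-search decoding to stay minimal.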
Appears in Collections: SCSE Conference Papers
Files in This Item:
_DR_NTU_An_Automated_Remote_Side_channel_Attack_to_FPGA_based_DNN_Accelerators.pdf (2.97 MB, Adobe PDF)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.