Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/178546
Title: Layer sequence extraction of optimized DNNs using side-channel information leaks
Authors: Sun, Yidan; Jiang, Guiyuan; Liu, Xinwang; He, Peilan; Lam, Siew-Kei
Keywords: Computer and Information Science
Issue Date: 2024
Source: Sun, Y., Jiang, G., Liu, X., He, P. & Lam, S. (2024). Layer sequence extraction of optimized DNNs using side-channel information leaks. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. https://dx.doi.org/10.1109/TCAD.2024.3389554
Project: NTU-DESAY SV 2018-0980; MOE-T2EP20121-0008
Journal: IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
Abstract: Deep Neural Network (DNN) Intellectual Property (IP) models must be kept undisclosed to avoid revealing trade secrets. Recent works have devised machine learning techniques that leverage side-channel information leakage of the target platform to reverse engineer DNN architectures. However, these works fail to perform successful attacks on DNNs that have undergone performance optimizations (i.e., operator fusion) using DNN compilers, e.g., Apache Tensor Virtual Machine (TVM). We propose a two-phase attack framework to infer the layer sequences of optimized DNNs through side-channel information leakage. In the first phase, we use a recurrent network with multi-head attention components to learn the intra- and inter-layer fusion patterns from GPU traces of TVM-optimized DNNs, in order to accurately predict the operation distribution. The second phase uses a model to learn the run-time temporal correlations between operations and layers, which enables the prediction of the layer sequence. An encoding strategy is proposed to overcome the convergence issues faced by existing learning-based methods when inferring the layer sequences of optimized DNNs. Extensive experiments show that our learning-based framework outperforms state-of-the-art DNN model extraction techniques. Our framework is also the first to effectively reverse engineer both Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) using side-channel leakage.
URI: https://hdl.handle.net/10356/178546
ISSN: 0278-0070
DOI: 10.1109/TCAD.2024.3389554
Schools: School of Computer Science and Engineering; College of Computing and Data Science
Research Centres: Cyber Security Research Centre @ NTU (CYSREN)
Rights: © 2024 IEEE. All rights reserved. This article may be downloaded for personal use only. Any other use requires prior permission of the copyright holder. The Version of Record is available online at http://doi.org/10.1109/TCAD.2024.3389554.
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections: | CCDS Journal Articles |
Files in This Item:
| File | Description | Size | Format | |
|---|---|---|---|---|
| Layer_Sequence_Extraction_of_Optimized_DNNs_Using_Side-Channel_Information_Leaks.pdf | | 7.88 MB | Adobe PDF | View/Open |
Page view(s): 82 (updated on Mar 27, 2025)
Download(s): 63 (updated on Mar 27, 2025)
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.