Title: Securing edge deep neural network against input evasion and IP theft
Authors: Wang, Si
Keywords: Engineering::Electrical and electronic engineering::Integrated circuits
Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
Issue Date: 2021
Publisher: Nanyang Technological University
Source: Wang, S. (2021). Securing edge deep neural network against input evasion and IP theft. Doctoral thesis, Nanyang Technological University, Singapore.
Project: MOE-2015-T2-2-013
Abstract: Deep learning is a key driver that puts artificial intelligence (AI) on the radar screen for technology investment. A Deep Neural Network (DNN) automatically learns high-level features directly from raw data in a hierarchical manner, which eliminates the manual extraction of effective features required by traditional machine learning solutions. The ability to solve problems end-to-end enables a system to learn complex function mappings for ill-posed problems, with prediction accuracy often exceeding that of sophisticated statistical models and other machine learning methods. Computer vision is one specific domain in which DNNs have demonstrated this remarkable abstraction power. Unlike other mainstream classification approaches, DNNs can usually achieve better results with more data and larger models. Over the last decade, the model complexity and regularization mechanisms of DNNs have grown tremendously to overcome the performance plateau and improve generalization ability. The flourishing of the Internet of Things (IoT) has changed the way data are generated and curated. Consequently, DNN hardware accelerators, open-source AI model compilers, and commercially available toolkits like Intel(R) OpenVINO(TM) have evolved to enable more user-centric deep learning applications to run on edge devices without being limited by network latency. This research is motivated by two major security threats to deep learning. One is the adversarial example, obtained by deliberately adding imperceptibly small perturbations to a benign input. Such input evasion can delude a well-trained classifier into making a wrong decision. Adversarial examples can be generated quickly at low cost. Their attack surface can also be extended beyond the software boundary and made more robust through high transferability across models. Existing countermeasures against adversarial examples are mainly designed and evaluated on software models of DNNs implemented with 32-bit floating-point arithmetic.
To support secure embedded intelligence, a defense should take the hardware optimizations and resource constraints of edge platforms into consideration. The other threat is Intellectual Property (IP) theft. Training a good DNN model requires a huge capital investment in manpower, time, and physical resources, which may not be affordable or accessible to small corporations, so a trained model is often a pricey proprietary asset of a business and is normally kept confidential. However, emerging model extraction attacks and reverse engineering techniques enable a DNN model to be stolen to build a similar-quality AI product at low cost. To protect the interests and revenue of the model owner, a pragmatic solution that can detect a pirated AI chip without reverse engineering it is required. A comprehensive review of DNN security has been conducted, highlighting the prevalent adversarial input generation methodologies and IP theft techniques, along with their corresponding countermeasures. Three original contributions, including two hardware-oriented approaches, a new lightweight in-situ adversarial input detector for edge DNNs, and a method for fingerprinting a DNN to attest model ownership, are presented in this thesis.
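The input-evasion threat described above can be illustrated with a minimal sketch of the classic Fast Gradient Sign Method (FGSM), one common way such perturbations are generated. This is not code from the thesis: the toy logistic "classifier" and the names `w`, `b`, and `fgsm_perturb` are hypothetical, chosen so the example is self-contained in NumPy with an analytic gradient.

```python
import numpy as np

# Illustrative FGSM sketch (hypothetical model, not from the thesis):
# craft a small perturbation x_adv = x + eps * sign(dL/dx) that pushes
# a toy logistic-regression classifier toward the wrong decision.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear model: p(y=1 | x) = sigmoid(w.x + b)
w = rng.normal(size=16)
b = 0.1

def fgsm_perturb(x, y, eps):
    """Fast Gradient Sign Method against binary cross-entropy loss L:
    for this model, dL/dx = (p - y) * w, so step in sign(dL/dx)."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w          # analytic input gradient of the loss
    return x + eps * np.sign(grad_x)

x = rng.normal(size=16)
y = 1 if sigmoid(w @ x + b) >= 0.5 else 0   # model's own clean decision

x_adv = fgsm_perturb(x, y, eps=0.5)
print("clean score:", sigmoid(w @ x + b))
print("adv   score:", sigmoid(w @ x_adv + b))
```

Because the perturbation follows the sign of the loss gradient, the adversarial score always moves away from the clean label; on a deep image classifier the same one-step recipe, with a small `eps` per pixel, yields inputs that look unchanged to a human yet are misclassified.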
DOI: 10.32657/10356/152267
Rights: This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections:EEE Theses

Files in This Item:
ws017_Amended_thesis_G1603609L.pdf (6.62 MB, Adobe PDF)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.