Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/155785
Title: Designing efficient DNNs via hardware-aware neural architecture search and beyond
Authors: Luo, Xiangzhong
Liu, Di
Huai, Shuo
Kong, Hao
Chen, Hui
Liu, Weichen
Keywords: Engineering::Computer science and engineering
Issue Date: 2021
Source: Luo, X., Liu, D., Huai, S., Kong, H., Chen, H. & Liu, W. (2021). Designing efficient DNNs via hardware-aware neural architecture search and beyond. IEEE Transactions On Computer-Aided Design of Integrated Circuits and Systems. https://dx.doi.org/10.1109/TCAD.2021.3100249
Project: MOE2019-T2-1-071
MOE2019-T1-001-072
M4082282
M4082087
Journal: IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
Abstract: Hardware systems integrated with deep neural networks (DNNs) are expected to pave the way for future artificial intelligence (AI). However, manually designing efficient DNNs demands non-trivial computational resources, since extensive trial and error is required to finalize the network configuration. To this end, this paper introduces a novel hardware-aware neural architecture search (NAS) framework, named GoldenNAS, to automate the design of efficient DNNs. First, we present a novel technique, called dynamic channel scaling, to enable channel-level search, since the number of channels has a non-negligible impact on both accuracy and efficiency. Second, we introduce an efficient progressive space shrinking method that tailors the search space to the target hardware and reduces search overheads. Third, we propose an effective hardware performance modeling method that approximates the runtime latency of DNNs on the target hardware; it is integrated into GoldenNAS to avoid tedious on-device measurements. We then employ an evolutionary algorithm (EA) to search for optimal operator/channel configurations, yielding networks denoted as GoldenNets. Finally, to make GoldenNets depthwise adaptive under dynamic environments, we propose an adaptive batch normalization (ABN) technique, followed by a self-knowledge distillation (SKD) approach that improves the accuracy of the adaptive sub-networks. Extensive experiments conducted directly on ImageNet clearly demonstrate the advantages of GoldenNAS over existing state-of-the-art approaches.
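
To make the abstract's techniques concrete, here is a minimal sketch of weight-sharing channel slicing, one common way to realize channel-level search such as the dynamic channel scaling named above; the sliced_conv helper, tensor shapes, and padding are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

def sliced_conv(x, weight, bias, out_ch):
    # Run a convolution using only the first `out_ch` filters of a shared
    # weight tensor, so one supernet layer can serve many channel widths.
    return F.conv2d(x, weight[:out_ch], bias[:out_ch], padding=1)

weight = torch.randn(64, 3, 3, 3)   # shared kernel: up to 64 output channels
bias = torch.randn(64)
x = torch.randn(1, 3, 32, 32)
y = sliced_conv(x, weight, bias, out_ch=32)  # channel-scaled to width 32
print(y.shape)  # torch.Size([1, 32, 32, 32])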
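The hardware performance modeling described in the abstract replaces on-device measurement with a learned or tabulated latency estimate. A minimal sketch follows, assuming a per-operator latency lookup table; the operator names, table values, and predict_latency function are hypothetical placeholders, not the paper's model.

# (operator, output channels) -> measured latency in ms on the target device
OP_LATENCY_MS = {
    ("mbconv3_k3", 32): 0.41,
    ("mbconv3_k5", 32): 0.58,
    ("mbconv6_k3", 64): 0.95,
}

def predict_latency(architecture):
    # Approximate end-to-end latency as the sum of per-layer latencies.
    return sum(OP_LATENCY_MS[(op, ch)] for op, ch in architecture)

arch = [("mbconv3_k3", 32), ("mbconv6_k3", 64)]
print(predict_latency(arch))  # -> 1.36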
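The EA search over operator/channel configurations can be pictured as a standard mutate-and-select loop under a latency budget. The sketch below uses a toy search space, a stand-in latency model, and a stand-in fitness function; all names and constants are assumptions for illustration, not GoldenNAS itself.

import random

OPS = ["mbconv3_k3", "mbconv3_k5", "mbconv6_k3"]  # hypothetical operator choices
CHANNELS = [16, 32, 64]                           # hypothetical channel choices
NUM_LAYERS = 4
LATENCY_BUDGET_MS = 3.0

def random_arch():
    return [(random.choice(OPS), random.choice(CHANNELS)) for _ in range(NUM_LAYERS)]

def mutate(arch, prob=0.3):
    # Resample the operator/channel choice of some layers at random.
    return [(random.choice(OPS), random.choice(CHANNELS))
            if random.random() < prob else layer for layer in arch]

def latency_ms(arch):
    # Stand-in for the latency model: wider layers cost more.
    return sum(0.01 * ch for _, ch in arch)

def fitness(arch):
    # Stand-in for supernet-based accuracy, penalized when over budget.
    return sum(ch for _, ch in arch) - 100.0 * max(0.0, latency_ms(arch) - LATENCY_BUDGET_MS)

def evolutionary_search(generations=20, pop_size=16, top_k=4):
    population = [random_arch() for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(population, key=fitness, reverse=True)[:top_k]
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - top_k)]
    return max(population, key=fitness)

print(evolutionary_search())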
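Finally, the ABN and SKD steps can be illustrated with two small PyTorch utilities: re-estimating BatchNorm statistics for the currently active sub-network, and distilling soft labels from the full network into a sub-network. This is a generic sketch of those standard mechanisms, assuming a torch DataLoader yielding (images, labels); it is not the authors' code.

import torch
import torch.nn as nn
import torch.nn.functional as F

def recalibrate_bn(model, loader, num_batches=50):
    # Reset BN running statistics, then re-estimate them for the active
    # sub-network by forwarding a few calibration batches (no weight updates).
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            m.reset_running_stats()
    model.train()  # BN accumulates running stats only in train mode
    with torch.no_grad():
        for i, (images, _) in enumerate(loader):
            if i >= num_batches:
                break
            model(images)
    model.eval()

def skd_loss(student_logits, teacher_logits, temperature=1.0):
    # Soft-label distillation loss from the full network (teacher) to an
    # adaptive sub-network (student).
    t = temperature
    return F.kl_div(F.log_softmax(student_logits / t, dim=1),
                    F.softmax(teacher_logits / t, dim=1),
                    reduction="batchmean") * (t * t)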
URI: https://hdl.handle.net/10356/155785
ISSN: 0278-0070
DOI: 10.1109/TCAD.2021.3100249
Rights: © 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: https://doi.org/10.1109/TCAD.2021.3100249.
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections:SCSE Journal Articles

Files in This Item:
File: manuscript-tcad.pdf | Size: 7.87 MB | Format: Adobe PDF

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.