Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/165388
Title: SurgeNAS: a comprehensive surgery on hardware-aware differentiable neural architecture search
Authors: Luo, Xiangzhong; Liu, Di; Kong, Hao; Huai, Shuo; Chen, Hui; Liu, Weichen
Keywords: Engineering::Computer science and engineering
Issue Date: 2022
Source: Luo, X., Liu, D., Kong, H., Huai, S., Chen, H. & Liu, W. (2022). SurgeNAS: a comprehensive surgery on hardware-aware differentiable neural architecture search. IEEE Transactions on Computers, 72(4), 1081-1094. https://dx.doi.org/10.1109/TC.2022.3188175
Project: MOE2019-T2-1-071; MOE2019-T1-001-072; NAP (M4082282); SUG (M4082087)
Journal: IEEE Transactions on Computers
Abstract: Differentiable neural architecture search (NAS) is an emerging paradigm for automating the design of top-performing convolutional neural networks (CNNs). However, previous differentiable NAS methods suffer from several crucial weaknesses, such as inaccurate gradient estimation, high memory consumption, and a lack of search fairness. More importantly, previous differentiable NAS works are mostly hardware-agnostic, since they search for CNNs only in terms of accuracy and ignore other critical performance metrics such as latency. In this work, we introduce a novel hardware-aware differentiable NAS framework, namely SurgeNAS, in which we leverage one-level optimization to avoid inaccuracy in gradient estimation. To this end, we propose an effective identity mapping regularization to alleviate the over-selecting issue. Besides, to mitigate the memory bottleneck, we propose an ordered differentiable sampling approach, which significantly reduces the search memory consumption to the single-path level, thereby allowing the search to be performed directly on target tasks instead of small proxy tasks, while guaranteeing strict search fairness. Moreover, we introduce a graph neural network (GNN) based predictor to approximate the on-device latency, which is further integrated into SurgeNAS to enable latency-aware architecture search. Finally, we analyze the resource underutilization issue and propose to scale up the searched SurgeNets within the Comfort Zone to balance computation and memory access, which brings considerable accuracy improvement without deteriorating execution efficiency. Extensive experiments are conducted on ImageNet with diverse hardware platforms, which clearly show the effectiveness of SurgeNAS in terms of accuracy, latency, and search efficiency.
URI: https://hdl.handle.net/10356/165388
ISSN: 0018-9340
DOI: 10.1109/TC.2022.3188175
DOI (Related Dataset): 10.21979/N9/Y2TO6G
Schools: School of Computer Science and Engineering
Research Centres: HP-NTU Digital Manufacturing Corporate Lab
Rights: © 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: https://doi.org/10.1109/TC.2022.3188175.
Fulltext Permission: open
Fulltext Availability: With Fulltext
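For readers unfamiliar with hardware-aware differentiable NAS, the sketch below illustrates the general idea behind a latency-aware search objective: a task loss is combined with a differentiable estimate of expected latency, computed as the softmax-weighted sum of the predicted latencies of candidate operations. This is a minimal generic illustration, not the SurgeNAS implementation; the candidate operation names, the lookup-table `predict_latency` (standing in for the paper's GNN-based predictor), and the weighting factor `lambda_lat` are all assumptions introduced here for illustration.

```python
# Generic sketch of a latency-aware differentiable NAS objective (PyTorch).
# NOT the SurgeNAS algorithm; all names below are illustrative assumptions.
import torch
import torch.nn.functional as F

CANDIDATE_OPS = ["conv3x3", "conv5x5", "skip_connect"]  # toy search space

# Architecture parameters: one logit per candidate operation on a single edge.
alpha = torch.zeros(len(CANDIDATE_OPS), requires_grad=True)

def predict_latency(op_name: str) -> torch.Tensor:
    # Stand-in for a learned latency predictor (the paper uses a GNN-based one);
    # here it is just a fixed lookup table in milliseconds.
    table = {"conv3x3": 1.8, "conv5x5": 3.1, "skip_connect": 0.1}
    return torch.tensor(table[op_name])

def latency_aware_loss(logits: torch.Tensor,
                       targets: torch.Tensor,
                       lambda_lat: float = 0.1) -> torch.Tensor:
    # Task loss on the network's predictions.
    ce = F.cross_entropy(logits, targets)
    # Differentiable expected latency: softmax-weighted sum of the
    # predicted latencies of the candidate operations.
    probs = F.softmax(alpha, dim=0)
    expected_lat = sum(p * predict_latency(op)
                       for p, op in zip(probs, CANDIDATE_OPS))
    return ce + lambda_lat * expected_lat

# Toy usage: random "network output" for a batch of 4 samples, 10 classes.
logits = torch.randn(4, 10, requires_grad=True)
targets = torch.randint(0, 10, (4,))
loss = latency_aware_loss(logits, targets)
loss.backward()  # gradients flow into both the network output and alpha
print(f"loss = {loss.item():.4f}, d(loss)/d(alpha) = {alpha.grad}")
```

Because the latency term is differentiable with respect to the architecture parameters, the same gradient step that improves accuracy also steers the search toward operations the predictor estimates to be fast on the target device.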
Appears in Collections: SCSE Journal Articles
Files in This Item:
File | Description | Size | Format
---|---|---|---
manuscript-tc.pdf | | 10.54 MB | Adobe PDF
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.