Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/155784
Full metadata record
DC Field | Value | Language
dc.contributor.author | Luo, Xiangzhong | en_US
dc.contributor.author | Liu, Di | en_US
dc.contributor.author | Huai, Shuo | en_US
dc.contributor.author | Liu, Weichen | en_US
dc.date.accessioned | 2022-03-22T01:31:04Z | -
dc.date.available | 2022-03-22T01:31:04Z | -
dc.date.issued | 2021 | -
dc.identifier.citation | Luo, X., Liu, D., Huai, S. & Liu, W. (2021). HSCoNAS: hardware-software co-design of efficient DNNs via neural architecture search. 2021 Design, Automation & Test in Europe Conference & Exhibition (DATE). https://dx.doi.org/10.23919/DATE51398.2021.9473937 | en_US
dc.identifier.uri | https://hdl.handle.net/10356/155784 | -
dc.description.abstract | In this paper, we present a novel multi-objective hardware-aware neural architecture search (NAS) framework, namely HSCoNAS, to automate the design of deep neural networks (DNNs) with high accuracy but low latency on target hardware. To accomplish this goal, we first propose an effective hardware performance modeling method to approximate the runtime latency of DNNs on target hardware, which is integrated into HSCoNAS to avoid tedious on-device measurements. Besides, we propose two novel techniques, i.e., dynamic channel scaling to maximize accuracy under a specified latency budget, and progressive space shrinking to refine the search space towards the target hardware and alleviate search overheads. These two techniques work jointly to allow HSCoNAS to perform fine-grained and efficient exploration. Finally, an evolutionary algorithm (EA) is incorporated to conduct the architecture search. Extensive experiments on ImageNet are conducted on diverse target hardware, i.e., GPU, CPU, and an edge device, to demonstrate the superiority of HSCoNAS over recent state-of-the-art approaches. | en_US
dc.description.sponsorship | Ministry of Education (MOE) | en_US
dc.description.sponsorship | Nanyang Technological University | en_US
dc.language.iso | en | en_US
dc.relation | MOE2019-T2-1-071 | en_US
dc.relation | MOE2019-T1-001-072 | en_US
dc.relation | M4082282 | en_US
dc.relation | M4082087 | en_US
dc.rights | © 2021 EDAA, published by IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: https://doi.org/10.23919/DATE51398.2021.9473937. | en_US
dc.subject | Engineering::Computer science and engineering | en_US
dc.title | HSCoNAS: hardware-software co-design of efficient DNNs via neural architecture search | en_US
dc.type | Conference Paper | en
dc.contributor.school | School of Computer Science and Engineering | en_US
dc.contributor.conference | 2021 Design, Automation & Test in Europe Conference & Exhibition (DATE) | en_US
dc.contributor.research | HP-NTU Digital Manufacturing Corporate Lab | en_US
dc.identifier.doi | 10.23919/DATE51398.2021.9473937 | -
dc.description.version | Submitted/Accepted version | en_US
dc.subject.keywords | Performance Evaluation | en_US
dc.subject.keywords | Runtime | en_US
dc.citation.conferencelocation | Grenoble, France | en_US
dc.description.acknowledgement | This work is partially supported by the Ministry of Education, Singapore, under its Academic Research Fund Tier 2 (MOE2019-T2-1-071) and Tier 1 (MOE2019-T1-001-072), and partially supported by Nanyang Technological University, Singapore, under its NAP (M4082282) and SUG (M4082087). | en_US
item.grantfulltext | open | -
item.fulltext | With Fulltext | -
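The abstract describes a latency-constrained evolutionary architecture search guided by a hardware performance model. As a rough illustration only, and not the paper's actual algorithm, the following toy Python sketch evolves per-layer channel widths under a latency budget. The search space, `predicted_latency`, and `proxy_accuracy` are all invented stand-ins for the paper's hardware performance model, dynamic channel scaling, and trained-accuracy evaluation.

```python
import random

random.seed(0)  # deterministic toy run

# Hypothetical search space: an architecture is a list of per-layer channel
# widths (loosely analogous to what dynamic channel scaling would tune).
WIDTHS = [16, 32, 64, 128]
NUM_LAYERS = 4
LATENCY_BUDGET_MS = 8.0

def predicted_latency(arch):
    # Stand-in for a learned hardware performance model: a crude analytical
    # proxy, where a real predictor would be fit to on-device measurements.
    return sum(0.02 * w for w in arch)

def proxy_accuracy(arch):
    # Stand-in fitness: wider layers score higher. A real NAS would train
    # and evaluate the candidate network instead.
    return sum(arch)

def mutate(arch):
    # Resample one layer's width at random.
    child = list(arch)
    child[random.randrange(NUM_LAYERS)] = random.choice(WIDTHS)
    return child

def evolutionary_search(pop_size=20, generations=30):
    population = [[random.choice(WIDTHS) for _ in range(NUM_LAYERS)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Keep candidates within the latency budget, then select top parents.
        feasible = [a for a in population
                    if predicted_latency(a) <= LATENCY_BUDGET_MS]
        if not feasible:  # fall back if every candidate violates the budget
            feasible = population
        feasible.sort(key=proxy_accuracy, reverse=True)
        parents = feasible[:pop_size // 2]
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - len(parents))]
    final_feasible = [a for a in population
                      if predicted_latency(a) <= LATENCY_BUDGET_MS]
    return max(final_feasible or population, key=proxy_accuracy)

best = evolutionary_search()
print(best, predicted_latency(best))
```

Using the latency predictor inside the fitness loop, rather than measuring each candidate on the device, is what makes this style of search cheap enough to run over many generations.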
Appears in Collections: SCSE Conference Papers
Files in This Item:
File | Size | Format
manuscript.pdf | 1.5 MB | Adobe PDF

Citations (Scopus): 4 (updated on Jul 9, 2022)
Citations (Web of Science): 2 (updated on Sep 29, 2022)
Page view(s): 49 (updated on Sep 29, 2022)
Download(s): 10 (updated on Sep 29, 2022)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.