Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/155572
Full metadata record
DC Field: Value (Language)
dc.contributor.author: Huai, Shuo (en_US)
dc.contributor.author: Zhang, Lei (en_US)
dc.contributor.author: Liu, Di (en_US)
dc.contributor.author: Liu, Weichen (en_US)
dc.contributor.author: Subramaniam, Ravi (en_US)
dc.date.accessioned: 2022-03-11T01:10:53Z
dc.date.available: 2022-03-11T01:10:53Z
dc.date.issued: 2021
dc.identifier.citation: Huai, S., Zhang, L., Liu, D., Liu, W. & Subramaniam, R. (2021). ZeroBN: learning compact neural networks for latency-critical edge systems. 2021 58th ACM/IEEE Design Automation Conference (DAC), 151-156. https://dx.doi.org/10.1109/DAC18074.2021.9586309 (en_US)
dc.identifier.isbn: 9781665432740
dc.identifier.uri: https://hdl.handle.net/10356/155572
dc.description.abstract: Edge devices have been widely adopted to bring deep learning applications onto low-power embedded systems, mitigating the privacy and latency issues of accessing cloud servers. The increasing computational demand of complex neural network models leads to large latency on edge devices with limited resources. Many application scenarios are real-time and impose strict latency constraints, while conventional neural network compression methods are not latency-oriented. In this work, we propose a novel training method for compact neural networks that reduces model latency on latency-critical edge systems. A latency predictor is also introduced to guide and optimize this procedure. Coupled with the latency predictor, our method can guarantee the latency of a compact model with only one training process. The experimental results show that, compared to state-of-the-art model compression methods, our approach fits 'hard' latency constraints well, significantly reducing latency with a mild accuracy drop. To satisfy a 34 ms latency constraint, we compact ResNet-50 with only a 0.82% accuracy drop; for GoogLeNet, we can even increase the accuracy by 0.3%. (en_US)
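The abstract describes latency-oriented compression guided by a latency predictor. The paper's exact ZeroBN algorithm is not reproduced in this record; as the name suggests zeroing BatchNorm scale factors, the following is a minimal illustrative sketch of that idea: prune the channels with the smallest |gamma| magnitudes, globally across layers, until a (here, assumed linear) latency predictor says the budget is met. The names `predicted_latency`, `cost_per_channel_ms`, and the per-channel cost model are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of latency-budgeted channel pruning via BatchNorm
# scale factors ("gammas"). The linear latency predictor and per-channel
# costs below are illustrative assumptions, not the paper's actual model.

def predicted_latency(channel_counts, cost_per_channel_ms):
    """Toy linear latency predictor: each remaining channel in layer i
    contributes cost_per_channel_ms[i] milliseconds."""
    return sum(n * c for n, c in zip(channel_counts, cost_per_channel_ms))

def zero_bn_prune(gammas, cost_per_channel_ms, latency_budget_ms):
    """Remove channels with the smallest |gamma| across all layers until
    the predicted latency of the remaining channels meets the budget.

    gammas: one list of |gamma| magnitudes per layer.
    Returns the number of channels kept in each layer."""
    counts = [len(layer) for layer in gammas]
    # Rank channels globally, least important (smallest |gamma|) first.
    candidates = sorted(
        (g, i) for i, layer in enumerate(gammas) for g in layer
    )
    for _, layer_idx in candidates:
        if predicted_latency(counts, cost_per_channel_ms) <= latency_budget_ms:
            break  # budget satisfied; stop pruning
        if counts[layer_idx] > 1:  # always keep at least one channel
            counts[layer_idx] -= 1
    return counts

# Example: two layers, a 30 ms budget, layer 0 costing 10 ms/channel
# and layer 1 costing 5 ms/channel.
kept = zero_bn_prune([[0.9, 0.1, 0.2], [0.8, 0.05]], [10.0, 5.0], 30.0)
```

In the actual method the predictor would presumably be learned from on-device measurements and coupled into training itself, rather than applied as a post-hoc greedy selection as sketched here.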
dc.description.sponsorship: National Research Foundation (NRF) (en_US)
dc.language.iso: en (en_US)
dc.relation: I1801E0028 (en_US)
dc.relation.uri: 10.21979/N9/IRNJ4I (en_US)
dc.rights: ©2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: https://doi.org/10.1109/DAC18074.2021.9586309. (en_US)
dc.subject: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence (en_US)
dc.title: ZeroBN: learning compact neural networks for latency-critical edge systems (en_US)
dc.type: Conference Paper (en)
dc.contributor.school: School of Computer Science and Engineering (en_US)
dc.contributor.conference: 2021 58th ACM/IEEE Design Automation Conference (DAC) (en_US)
dc.contributor.research: HP-NTU Digital Manufacturing Corporate Lab (en_US)
dc.identifier.doi: 10.1109/DAC18074.2021.9586309
dc.description.version: Submitted/Accepted version (en_US)
dc.identifier.scopus: 2-s2.0-85119416118
dc.identifier.spage: 151 (en_US)
dc.identifier.epage: 156 (en_US)
dc.subject.keywords: ZeroBN (en_US)
dc.subject.keywords: Compact Learning (en_US)
dc.citation.conferencelocation: San Francisco, CA, USA (en_US)
dc.description.acknowledgement: This research was conducted in collaboration with HP Inc. and supported by the National Research Foundation (NRF) Singapore and the Singapore Government through the Industry Alignment Fund - Industry Collaboration Projects Grant (I1801E0028). (en_US)
item.grantfulltext: open
item.fulltext: With Fulltext
Appears in Collections:SCSE Conference Papers
Files in This Item:
File: ZeroBN_Accept_Version.pdf | Size: 1.15 MB | Format: Adobe PDF | View/Open

SCOPUSTM Citations: 1 (updated on Jul 9, 2022)
Page view(s): 69 (updated on Sep 28, 2022)
Download(s): 16 (updated on Sep 28, 2022)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.