Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/137659
Full metadata record
DC Field | Value | Language
dc.contributor.author | Dong, Xin | en_US
dc.contributor.author | Chen, Shangyu | en_US
dc.contributor.author | Pan, Sinno Jialin | en_US
dc.date.accessioned | 2020-04-08T01:57:01Z | -
dc.date.available | 2020-04-08T01:57:01Z | -
dc.date.issued | 2017 | -
dc.identifier.citation | Dong, X., Chen, S., & Pan, S. J. (2017). Learning to prune deep neural networks via layer-wise optimal brain surgeon. Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017). | en_US
dc.identifier.uri | https://hdl.handle.net/10356/137659 | -
dc.description.abstract | How to develop slim and accurate deep neural networks has become crucial for real-world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second-order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstruction errors caused at each layer. By controlling layer-wise errors properly, one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods. Code for our work is released at: https://github.com/csyhhu/L-OBS. | en_US
dc.description.sponsorship | MOE (Min. of Education, S’pore) | en_US
dc.language.iso | en | en_US
dc.rights | © 2017 Neural Information Processing Systems. All rights reserved. This paper was published in the Proceedings of the 31st Conference on Neural Information Processing Systems and is made available with permission of Neural Information Processing Systems. | en_US
dc.subject | Engineering::Computer science and engineering | en_US
dc.title | Learning to prune deep neural networks via layer-wise optimal brain surgeon | en_US
dc.type | Conference Paper | en
dc.contributor.school | School of Computer Science and Engineering | en_US
dc.contributor.conference | 31st Conference on Neural Information Processing Systems (NIPS 2017) | en_US
dc.description.version | Published version | en_US
dc.identifier.arxiv | 1705.07565 | -
dc.subject.keywords | Deep Neural Networks | en_US
dc.subject.keywords | Layer-wise Optimal Brain Surgeon | en_US
dc.citation.conferencelocation | CA, USA | en_US
item.grantfulltext | open | -
item.fulltext | With Fulltext | -
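
The abstract above describes pruning each layer's parameters independently using second-order derivatives of a layer-wise error function (Layer-wise Optimal Brain Surgeon). As a rough illustration only, the sketch below applies the classical OBS saliency and weight-update formulas to a single fully-connected layer under a squared reconstruction error over sampled layer inputs. The function name prune_layer_obs, the damping term, the one-shot (simultaneous) pruning approximation, and the use of NumPy are assumptions of this sketch, not details taken from the paper or the released code at https://github.com/csyhhu/L-OBS.

```python
# Schematic sketch of layer-wise Optimal Brain Surgeon pruning for one
# fully-connected layer (NOT the authors' implementation).
import numpy as np


def prune_layer_obs(W, Y, sparsity, damping=1e-6):
    """Prune a fraction `sparsity` of the weights in W (shape: d_in x d_out).

    Y holds sampled inputs to the layer (shape: n x d_in).  For a squared
    reconstruction error, the layer-wise Hessian is H = Y^T Y / n, shared by
    all output units; the OBS saliency of weight w_q is w_q^2 / (2 [H^-1]_qq).
    """
    n, d_in = Y.shape
    H = Y.T @ Y / n + damping * np.eye(d_in)   # damping is an assumption here, to keep H invertible
    H_inv = np.linalg.inv(H)
    h_diag = np.diag(H_inv)                    # [H^-1]_qq for every input index q

    # Saliency of every weight: small saliency -> cheap to remove.
    saliency = W ** 2 / (2.0 * h_diag[:, None])

    # Pick the k lowest-saliency weights across the whole layer.
    k = int(sparsity * W.size)
    flat_idx = np.argsort(saliency, axis=None)[:k]
    prune_mask = np.zeros_like(W, dtype=bool)
    prune_mask[np.unravel_index(flat_idx, W.shape)] = True

    # OBS update for a pruned weight q in output column j:
    #   delta_w_j = -(W[q, j] / [H^-1]_qq) * H^-1[:, q]
    # Accumulate the updates of all pruned weights, then hard-zero them.
    scale = np.where(prune_mask, W / h_diag[:, None], 0.0)
    W_new = W - H_inv @ scale
    W_new[prune_mask] = 0.0
    return W_new, prune_mask


# Illustrative usage with random data (shapes and sparsity are arbitrary).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Y = rng.normal(size=(1024, 512))           # n sampled inputs to the layer
    W = rng.normal(size=(512, 256))            # d_in x d_out weight matrix
    W_pruned, mask = prune_layer_obs(W, Y, sparsity=0.6)
    print("pruned fraction:", mask.mean())
```

Pruning all selected weights in one shot using the initial saliencies is an approximation of the strictly sequential OBS procedure, used here only to keep the sketch short; the compensating update is what keeps the layer-wise reconstruction error small so that, as the abstract states, only light retraining is needed afterwards.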
Appears in Collections: SCSE Conference Papers
Files in This Item:
File | Description | Size | Format
Learning to prune deep neural networks via layer-wise optimal brain surgeon.pdf | | 451.25 kB | Adobe PDF

Page view(s): 144 (updated on Mar 30, 2023)
Download(s): 85 (updated on Mar 30, 2023)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.