Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/147509
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Piyasena, Duvindu | en_US |
dc.contributor.author | Wickramasinghe, Rukshan | en_US |
dc.contributor.author | Paul, Debdeep | en_US |
dc.contributor.author | Lam, Siew-Kei | en_US |
dc.contributor.author | Wu, Meiqing | en_US |
dc.date.accessioned | 2021-04-19T03:27:20Z | - |
dc.date.available | 2021-04-19T03:27:20Z | - |
dc.date.issued | 2019 | - |
dc.identifier.citation | Piyasena, D., Wickramasinghe, R., Paul, D., Lam, S. & Wu, M. (2019). Lowering dynamic power of a stream-based CNN hardware accelerator. 2019 IEEE 21st International Workshop on Multimedia Signal Processing (MMSP), 1-6. https://dx.doi.org/10.1109/MMSP.2019.8901777 | en_US |
dc.identifier.isbn | 9781728118178 | - |
dc.identifier.uri | https://hdl.handle.net/10356/147509 | - |
dc.description.abstract | Custom hardware accelerators of Convolutional Neural Networks (CNN) provide a promising solution to meet real-time constraints for a wide range of applications on low-cost embedded devices. In this work, we aim to lower the dynamic power of a stream-based CNN hardware accelerator by reducing the computational redundancies in the CNN layers. In particular, we investigate the redundancies due to the downsampling effect of max pooling layers which are prevalent in state-of-the-art CNNs, and propose an approximation method to reduce the overall computations. The experimental results show that the proposed method leads to lower dynamic power without sacrificing accuracy. | en_US |
dc.description.sponsorship | National Research Foundation (NRF) | en_US |
dc.language.iso | en | en_US |
dc.relation | TUM CREATE | en_US |
dc.rights | © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: https://doi.org/10.1109/MMSP.2019.8901777 | en_US |
dc.subject | Engineering::Computer science and engineering::Hardware::Register-transfer-level implementation | en_US |
dc.subject | Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision | en_US |
dc.title | Lowering dynamic power of a stream-based CNN hardware accelerator | en_US |
dc.type | Conference Paper | en |
dc.contributor.school | School of Computer Science and Engineering | en_US |
dc.contributor.conference | 2019 IEEE 21st International Workshop on Multimedia Signal Processing (MMSP) | en_US |
dc.contributor.research | Hardware & Embedded Systems Lab (HESL) | en_US |
dc.identifier.doi | 10.1109/MMSP.2019.8901777 | - |
dc.description.version | Accepted version | en_US |
dc.identifier.scopus | 2-s2.0-85075739729 | - |
dc.identifier.spage | 1 | en_US |
dc.identifier.epage | 6 | en_US |
dc.subject.keywords | FPGA | en_US |
dc.subject.keywords | Convolutional Neural Networks | en_US |
dc.citation.conferencelocation | Kuala Lumpur, Malaysia | en_US |
dc.description.acknowledgement | This research project is funded by the National Research Foundation Singapore under its Campus for Research Excellence and Technological Enterprise (CREATE) programme with the Technical University of Munich at TUMCREATE. | en_US |
item.grantfulltext | open | - |
item.fulltext | With Fulltext | - |
Appears in Collections: | SCSE Conference Papers |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
2019_mmsp_Lowering Dynamic Power of a Stream-based CNN Hardware Accelerator.pdf | | 266.51 kB | Adobe PDF | View/Open |
SCOPUS™ Citations: 3 (updated on Jan 28, 2023)
Page view(s): 185 (updated on Feb 1, 2023)
Download(s): 69 (updated on Feb 1, 2023)
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.