Title: iMAD: an in-memory accelerator for AdderNet with efficient 8-bit addition and subtraction operations
Authors: Zhu, Shien
Li, Shiqing
Liu, Weichen
Keywords: Engineering::Computer science and engineering::Computer systems organization::Special-purpose and application-based systems
Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Issue Date: 2022
Source: Zhu, S., Li, S. & Liu, W. (2022). iMAD: an in-memory accelerator for AdderNet with efficient 8-bit addition and subtraction operations. 32nd Great Lakes Symposium on VLSI 2022 (GLSVLSI '22), June 2022, 65-70.
Project: MOE2019-T2-1-071 
Conference: 32nd Great Lakes Symposium on VLSI 2022 (GLSVLSI '22)
Abstract: Adder Neural Network (AdderNet) is a new type of Convolutional Neural Network (CNN) that replaces the computation-intensive multiplications in convolution layers with lightweight additions and subtractions. As a result, AdderNet preserves high accuracy with its adder convolution kernels while achieving high speed and power efficiency. In-Memory Computing (IMC) is a next-generation artificial-intelligence computing paradigm that has been widely adopted to accelerate binary and ternary CNNs. Since AdderNet achieves much higher accuracy than binary and ternary CNNs, accelerating AdderNet with IMC can deliver both performance and accuracy benefits. However, existing IMC devices have no dedicated subtraction function, and naively adding subtraction logic may enlarge the area, raise the power consumption, and degrade the addition performance. In this paper, we propose iMAD, an in-memory accelerator for AdderNet with efficient 8-bit addition and subtraction operations. First, we propose an efficient in-memory subtraction operator at the circuit level and co-optimize the addition performance to reduce latency and power. Second, we propose an accelerator architecture with high parallelism based on the optimized operators. Third, we propose an IMC-friendly computation pipeline for AdderNet convolution at the algorithm level to further boost performance. Evaluation results show that iMAD achieves a 3.25X speedup and 3.55X better energy efficiency compared with a state-of-the-art in-memory accelerator.
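For context, the adder convolution that the abstract describes replaces each multiply-accumulate with the negated L1 distance between an input patch and a filter, so only additions and subtractions are needed. The following is a minimal NumPy sketch of that operation (valid padding, stride 1); the function name, argument shapes, and layout are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def adder_conv2d(x, w):
    """Sketch of an AdderNet adder convolution (valid padding, stride 1).

    Instead of y = sum(patch * filter), each output element is the
    negated L1 distance: y = -sum(|patch - filter|), which uses only
    subtraction, absolute value, and addition.
    x: input of shape (H, W, C); w: filters of shape (kH, kW, C, T).
    Returns an output of shape (H - kH + 1, W - kW + 1, T).
    """
    H, W, C = x.shape
    kH, kW, _, T = w.shape
    out = np.zeros((H - kH + 1, W - kW + 1, T))
    for m in range(out.shape[0]):
        for n in range(out.shape[1]):
            patch = x[m:m + kH, n:n + kW, :]          # input window
            for t in range(T):
                # Negated L1 distance between patch and filter t
                out[m, n, t] = -np.abs(patch - w[:, :, :, t]).sum()
    return out
```

When the patch exactly matches the filter, the output is 0 (the maximum possible value); larger mismatches give more negative outputs, mirroring how larger dot products indicate stronger matches in an ordinary convolution.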
ISBN: 978-1-4503-9322-5
DOI: 10.1145/3526241.3530313
DOI (Related Dataset): 10.21979/N9/JNFW9P
Schools: School of Computer Science and Engineering 
Research Centres: Parallel and Distributed Computing Centre 
Rights: © 2022 Association for Computing Machinery. All rights reserved. This paper was published in Proceedings of 32nd Great Lakes Symposium on VLSI 2022 (GLSVLSI '22) and is made available with permission of Association for Computing Machinery.
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections:SCSE Conference Papers

Files in This Item:
GLSVLSI_2022_iMAD_Zhu Shien_Accepted Version 2022-4-21.pdf (1.23 MB, Adobe PDF)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.