Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/171442
Title: Adder and multiplier design for computing-in-memory architecture based on ReRAM
Authors: Zheng, Buyun
Keywords: Engineering::Electrical and electronic engineering::Integrated circuits
Issue Date: 2023
Publisher: Nanyang Technological University
Source: Zheng, B. (2023). Adder and multiplier design for computing-in-memory architecture based on ReRAM. Master's thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/171442

Abstract: We live in an era in which the semiconductor and computer industries are developing rapidly, and the two are mutually reinforcing: higher-performance chips provide the hardware foundation for more powerful applications, while the constantly evolving computer and internet fields place ever greater demands on the semiconductor industry, giving it both impetus and a market. In today's computing field, technology has advanced faster than many people imagined; artificial intelligence (AI), for example, is profoundly changing people's lives. Such technologies and products require chips capable of fast, large-scale computation as their hardware foundation, but powerful general-purpose computing chips are currently expensive. Various new chip architectures have therefore been proposed, one of which is computing-in-memory (CIM). Compared with processing chips such as CPUs, memory is far cheaper, and it is indispensable in large-scale computing applications. If memory chips can also implement simple computing functions such as addition, multiplication, and multiply-and-accumulate (MAC), the cost of applications can be greatly reduced. CIM can also break through the "memory wall" bottleneck of the von Neumann architecture: it reduces the time and power consumed in reading data from main memory to the CPU and writing results back to main memory after computation. Moreover, if a large number of storage units can compute in parallel, throughput and efficiency improve. CIM is therefore a relatively ideal solution. Significant breakthroughs have already been made in CIM research; in most cases, however, the architecture is used only for the MAC operations of convolutional neural networks (CNNs), so it lacks flexibility and generality and cannot fully replace general-purpose processors such as CPUs and GPUs. Alongside this architectural innovation, memory devices based on different materials are also developing. Resistive random access memory (ReRAM), with its advantages in integration density, speed, non-volatility, and CMOS compatibility, has become a promising candidate for the next generation of memory. This thesis proposes a CIM architecture based on ReRAM. By exploiting the properties of the ReRAM device itself and adopting a logic style different from that of conventional CMOS circuits, it achieves 8-bit addition and multiplication with fewer resources and integrates them into the storage array. The analog hardware circuits and digital control logic were designed and simulated. I hope this thesis can point toward more general and programmable CIM architectures with wider applications in the future.

URI: https://hdl.handle.net/10356/171442
Schools: School of Electrical and Electronic Engineering
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
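The MAC operation that the abstract attributes to CIM arrays can be illustrated with a minimal sketch. This is not taken from the thesis; it assumes the standard crossbar picture in which each cell stores a conductance, row voltages encode one operand, and each column current is the analog sum of per-cell products (Ohm's law plus Kirchhoff's current law):

```python
import numpy as np

# Hypothetical illustration (not the thesis design): a ReRAM crossbar
# computes a matrix-vector product in place. Cell (i, j) stores a
# conductance G[i, j]; driving row i with voltage V[i] produces a
# column current I[j] = sum_i V[i] * G[i, j] — one MAC per column,
# performed where the data is stored instead of in a separate ALU.

rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))  # cell conductances, siemens
V = np.array([0.2, 0.0, 0.2, 0.2])        # row read voltages, volts

# Column currents: the MAC result that on-chip ADCs would digitize.
I = V @ G

# Same result computed term by term, confirming the crossbar sum.
ref = np.array([sum(V[i] * G[i, j] for i in range(4)) for j in range(3)])
assert np.allclose(I, ref)
```

The point of the sketch is the cost model: the multiply and the accumulate both happen in the analog domain in one read cycle per column, which is why CIM avoids the memory-to-CPU round trips described above.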
Appears in Collections: EEE Theses
Files in This Item:
File | Description | Size | Format
---|---|---|---
Adder and Multiplier Design for Computing-in-memory Architecture Based on ReRAM.pdf (Restricted Access) | | 3.34 MB | Adobe PDF
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.