2022 4th International Conference on Natural Language Processing (ICNLP)

Abstract

As neural-network research continues to deepen, data volumes have grown rapidly. Although most machine-learning computation is still performed on traditional processor architectures, conventional processors struggle to meet the big-data processing requirements of embedded systems that run neural networks. This paper proposes the in-memory computing architecture as a new type of processor architecture that avoids the extra energy consumption caused by the “memory wall” problem of the von Neumann architecture, thereby achieving a better acceleration effect on neural-network algorithms. The work investigates in-memory computing neural-network accelerator architectures based on three types of memory: Static Random-Access Memory (SRAM), Resistive Random-Access Memory (ReRAM), and Ferroelectric Field-Effect Transistor (FeFET). Networks such as VGG-8, VGG-16, and AlexNet are used to verify the SRAM-, ReRAM-, and FeFET-based in-memory computing architectures on three different datasets. The performance of the designed accelerator is evaluated with multiple metrics, including the acceleration effect of each layer, the number of Tile units used by each layer, read and write latency, read and write energy consumption, system area, and energy-efficiency ratio. The experimental results show that the in-memory computing neural-network accelerator designed in this paper reaches a relatively advanced level.
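The abstract's per-layer Tile-unit metric reflects how in-memory accelerators partition a layer's weight matrix across fixed-size memory crossbar tiles, each of which computes a partial matrix-vector product in place. The following Python sketch is illustrative only (it is not from the paper): the 128×128 tile dimensions, `tiles_needed`, and `tiled_matvec` are all hypothetical names and assumptions, chosen to show how tile counts and tiled accumulation might be modeled.

```python
import math

# Assumed crossbar tile dimensions (hypothetical; the paper's actual
# tile size is not stated in the abstract).
TILE_ROWS, TILE_COLS = 128, 128

def tiles_needed(out_features, in_features,
                 tile_rows=TILE_ROWS, tile_cols=TILE_COLS):
    """Number of crossbar tiles required to hold one layer's weight matrix."""
    return math.ceil(out_features / tile_rows) * math.ceil(in_features / tile_cols)

def tiled_matvec(weights, x, tile_rows=TILE_ROWS, tile_cols=TILE_COLS):
    """Compute y = W @ x by accumulating per-tile partial products,
    mimicking how an in-memory accelerator splits a layer across tiles."""
    out_f = len(weights)
    in_f = len(weights[0])
    y = [0.0] * out_f
    for r0 in range(0, out_f, tile_rows):
        for c0 in range(0, in_f, tile_cols):
            # Each (r0, c0) block corresponds to one tile's in-place
            # analog partial product; partial sums are merged digitally.
            for r in range(r0, min(r0 + tile_rows, out_f)):
                y[r] += sum(weights[r][c] * x[c]
                            for c in range(c0, min(c0 + tile_cols, in_f)))
    return y
```

For example, a hypothetical 512×4096 fully connected layer would occupy `tiles_needed(512, 4096)` = 4 × 32 = 128 tiles under these assumed dimensions, and `tiled_matvec` produces the same result as an untiled matrix-vector product.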
