IEEE Transactions on Artificial Intelligence

Keywords

Machine Learning, Hardware Acceleration, Throughput, Costs, Artificial Intelligence (AI), Power Demand, Performance Evaluation, Artificial Neural Network (ANN), Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Hardware Accelerator, Machine Learning Design, Machine Learning On Chip, Neural Network, High Performance, High Speed, Learning Performance, Resource Consumption, Hardware Implementation, Deep Neural Network, Energy Efficiency, Convolutional Layers, Power Consumption, Parallelization, Long Short-Term Memory, Graphics Processing Unit, Speech Recognition, Low Latency, Hardware Components, Fewer Units, Graph Convolutional Network, MNIST Dataset, Digital Signal Processing, Off-Chip Memory

Abstract

Artificial intelligence (AI) hardware accelerators are an emerging research area spanning several applications and domains. The goal of hardware acceleration is to provide high computational speed while retaining low cost and high learning performance. The main challenge is implementing complex machine learning models on hardware with high performance. This article presents a thorough investigation of machine learning accelerators and the associated challenges. It describes hardware implementations of different structures such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), and artificial neural networks (ANNs). Challenges such as speed, area, resource consumption, and throughput are discussed. It also presents a comparison of existing hardware designs. Finally, the article describes the evaluation parameters for a machine learning accelerator in terms of learning and testing performance and hardware design.