Call for Papers: AI for Hardware and Hardware for AI

IEEE Micro seeks submissions for this upcoming special issue.
Submissions Due: 25 April 2025

Publication: Nov/Dec 2025


For years, the computational landscape, stretching from data centers and supercomputers to simple home devices, has depended predominantly on general-purpose processors. This approach was sustainable while Moore's law guaranteed that chip transistor counts would double approximately every two years. Today, as the pace of Moore's law slows, the field has shifted increasingly toward hardware accelerators, which use hardware resources efficiently by implementing only the specific demands of their target applications.

Hardware accelerators, engineered primarily for AI applications ranging from computer vision to recommendation systems and natural language processing, have gained growing traction, attracting substantial industrial investment and increasing scholarly interest. While this shift has proven the capabilities of accelerators, they now face new challenges as AI grows: AI algorithms are not only scaling rapidly in size but also evolving at an accelerated rate. The scale and diversity of modern AI pose a substantial challenge for the design of hardware accelerators.

As a result, this IEEE Micro special issue seeks articles related not only to hardware accelerators for the next generation of AI but also to how AI itself can facilitate the creation of cost-efficient, fast, and scalable hardware. This issue's topics of interest include, but are not limited to:


  • Scalable hardware accelerators for the next generation of large AI models 
  • Deploying new technologies (e.g., in-memory computing, photonics, analog computing) for AI efficiency
  • Sparsity-aware optimization techniques for efficient AI
  • Integration of AI techniques to expedite the hardware/software co-design
  • Rethinking the software/hardware stack for heterogeneous AI accelerator systems 
  • Interconnection networks and data movement optimizations for the future of AI
  • Using AI methods to enhance the reliability of hardware accelerators, design validation, and front-end and back-end architecture design
  • Investigating security and privacy challenges in AI-assisted hardware accelerator design 

Submission Guidelines

For author information and guidelines on submission criteria, please visit the Author Information page. Please submit papers through the ScholarOne system, and be sure to select the special-issue name. Manuscripts should not have been published previously or be under submission for publication elsewhere. Please submit only full papers intended for review, not abstracts, to the ScholarOne portal.


Guest Editor

Bahar Asgari, University of Maryland College Park, USA

Contact the guest editor at bahar@umd.edu or the Editor-in-Chief, Hsien-Hsin Sean Lee, at lee.sean@gmail.com.