Call for Papers: Special Issue on AI Failures: Causes, Implications, and Prevention, Vol. 2

Computer seeks submissions for this upcoming special issue.

Important Dates

  • Submissions due: 17 March 2025
  • Publication date: November 2025

Call for Papers

Intelligent and autonomous systems are being developed and deployed all around us at a breathtaking pace. Alongside this rapid proliferation, we face the reality that autonomous learning systems fail, malfunction, and produce undesirable outcomes, in some cases with devastating consequences. Attempting to put a brake on the rapid growth of artificial intelligence is likely to be futile. Instead, we need a rigorous focus on ensuring the reliability of these systems.

In engineering, we learn more from analyzing failures than from studying successes. There is significant value in documenting and tracking AI failures in sufficient detail to understand their root causes and to put processes and practices in place to prevent similar problems in the future. Efforts to track and record vulnerabilities in traditional software led to the establishment of the National Vulnerability Database, which has contributed to our understanding of vulnerability trends, their root causes, and how to prevent them.

Computer magazine published a special issue on AI Failures in November 2024. Given the success of that issue and growing interest in the area, Computer is soliciting papers for a follow-up special issue on AI Failures: Causes, Implications, and Prevention.

As with the previous special issue, this one will continue to explore AI failures, from early systems to recent ones. Papers should discuss the causes of these failures, their implications for the field of AI, what can be learned from them, and how to build AI systems that are likely to avoid such failures.


Topics of interest include, but are not limited to:

  • Failure modes in different types of AI systems
    • Neural-network-based learning systems
    • Reinforcement-learning-based systems
    • Expert/rule-based systems
    • Large language models
    • Generative AI systems
  • Failure of AI systems in different domains
    • Recommendation systems
    • Autonomous vehicles
    • Decision-aiding tools
    • Diagnostic systems
    • Medical devices
    • Robotics
  • The causes of AI failures
    • Inadequacy of the training data
    • Inadequacy of the testing data or process
    • Issues with human interaction with AI/machines
    • Adversarial attacks on AI systems
    • Failures due to inadequate transfer learning
    • Failures due to evolution of the environment
  • The implications of AI failures
    • Quantification of loss from AI failures
    • The impact on trust and acceptance
    • Societal and legal implications
    • Economic impact
    • Regulatory issues
  • What can be learned from the failures
    • Importance of assurance metrics and methods
    • Testing methods and adequacy
    • Fault tolerance techniques
    • Root cause analysis
  • How to avoid AI failures in the future
    • Building reliability into the process
    • Test adequacy for AI systems
    • Continuous monitoring and adjustment
    • Documentation and reporting of failures
    • Safety/security analysis methods for AI/ML
    • Integration of explainability

Submissions should be original and unpublished. 


Submission Guidelines

For author information and guidelines on submission criteria, visit the Author's Information page. Please submit papers through the IEEE Author Portal and be sure to select the special issue or special section name. Manuscripts must not have been published, nor be currently under review, elsewhere. Submit only full papers intended for review, not abstracts; if an abstract is requested, email it directly to the guest editors.


Questions?