Call for Papers: IEEE AITest 2025

21 - 24 July 2025 | Tucson, Arizona, USA
Submissions Due: 1 April 2025

Important Dates

Main paper

  • Submission deadline: 1 April 2025
  • Author notification: 10 May 2025
  • Camera-ready and author registration: 1 June 2025

Workshop paper

  • Submission deadline: 15 May 2025
  • Author notification: 22 May 2025
  • Camera-ready and author registration: 1 June 2025

Conference dates: 21-24 July 2025


IEEE AITest 2025

IEEE AITest 2025 is the seventh edition of the IEEE conference series focusing on the synergy of artificial intelligence (AI) and software testing. The conference provides an international forum for researchers and practitioners to exchange novel research results, articulate problems and challenges from practice, deepen our understanding of the subject area with new theories, methodologies, techniques, process models, and impacts, and improve practice with new tools and resources. This year's conference will be held in Tucson, Arizona, USA, from 21 to 24 July 2025, as part of the IEEE CISOSE 2025 congress.



Topics of Interest

  • Methodologies, theories, techniques, and tools for testing, verification, and validation of AI
  • Test oracles for testing AI
  • Tools and resources for automated testing of AI
  • Specific concerns of testing domain-specific AI
  • AI techniques for software testing
  • AI applications in software testing
  • Testing of Large Language Models (LLMs)
  • Data quality and validation for AI
  • Human testers and AI-based testing
  • Quality assurance for unstructured training data
  • Quality evaluation and assurance for LLMs
  • LLMs for software engineering and testing
  • Responsible AI testing
  • Techniques for testing deep neural networks, reinforcement learning, and graph learning
  • Constraint programming for test case generation and test suite reduction
  • Constraint scheduling and optimization for test case prioritization and test execution scheduling
  • Crowdsourcing and swarm intelligence in software testing
  • Genetic algorithms, search-based techniques, and heuristics to optimize testing
  • Large-scale unstructured data quality certification
  • AI and data management policies
  • Impact of generative AI (GAI) on education
  • Computer vision testing
  • Intelligent chatbot testing
  • Smart machine (robot/AV/UAV) testing
  • Fairness, ethics, bias, and trustworthiness for LLM applications

Submission

Page Limits

Submit original manuscripts (not published or submitted elsewhere) with the following page limits:

  • Regular papers: 8 pages
  • Short papers: 4 pages
  • AI testing in practice papers: 8 pages
  • Tool demo track papers: 6 pages

Content

We welcome regular research papers that describe original and significant work, including case studies and empirical research, as well as short papers that describe late-breaking research results or work in progress with timely and innovative ideas. The AI Testing in Practice track provides a forum for networking, exchanging ideas, and sharing innovative or experimental practices that address software engineering research with a direct impact on the practice of software testing for AI. The tool demo track provides a forum to present and demonstrate innovative tools and/or new benchmarking datasets in the context of software testing for AI.


Format and Submission Instructions

  • All papers must be written in English. Papers must include a title, an abstract, and a list of 4-6 keywords.
  • All paper types may include up to 2 extra pages, subject to page charges.
  • All papers must be prepared in the IEEE double-column proceedings format.
  • Authors must submit their manuscripts via the EasyChair IEEE AITest 2025 submission site by 1 April 2025, 23:59 AoE at the latest.

For more information, please visit the conference website. The use of content generated by AI in an article (including but not limited to text, figures, images, and code) shall be disclosed in the acknowledgments section of the submitted article.