Call For Papers: Special Issue on Security and Privacy of Generative AI

IEEE Security & Privacy seeks submissions for this upcoming special issue.

Important Deadlines:
Submission deadline: 13 February 2025
Publication: September/October 2025


Deep learning has made remarkable progress in real-world applications ranging from robotics and image processing to medicine. While many deep learning approaches and algorithms are in use today, few have had as widespread an impact as those belonging to the generative artificial intelligence (AI) domain. Generative AI involves developing models that learn the underlying distribution of their training data; such models can then generate new data samples with characteristics similar to those of the original dataset. Common examples of generative AI include generative adversarial networks (GANs), variational autoencoders (VAEs), and transformers. In the last few years, generative AI and AI chatbots have made revolutionary progress, both technically and in their societal impact. As a result, generative AI has moved from being purely a research topic to one of equal interest to academia, industry, and general users.
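
To make the "learn the distribution, then generate" workflow concrete, consider a minimal Python sketch (NumPy only; the two-dimensional Gaussian training data are purely illustrative assumptions) that estimates a simple density model from data and then samples new points from it:

    import numpy as np

    # Illustrative "training set": 1,000 points drawn from a distribution
    # that the model does not know in advance.
    rng = np.random.default_rng(seed=0)
    true_mean = np.array([1.0, -2.0])
    true_cov = np.array([[2.0, 0.6], [0.6, 1.0]])
    train_data = rng.multivariate_normal(true_mean, true_cov, size=1000)

    # "Training": estimate the parameters of the underlying distribution.
    est_mean = train_data.mean(axis=0)
    est_cov = np.cov(train_data, rowvar=False)

    # "Generation": draw new samples with characteristics similar to
    # those of the original dataset.
    new_samples = rng.multivariate_normal(est_mean, est_cov, size=5)
    print(new_samples)

GANs, VAEs, and transformers replace this closed-form estimation with learned neural networks, but the underlying workflow is the same: fit a model of the data distribution, then sample from it.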

One domain where generative AI is driving significant improvements is security: it enables not only better, more secure designs but also more powerful evaluations of the security of systems. Unfortunately, generative AI is itself susceptible to various attacks that undermine its security and privacy. This special issue is dedicated to showcasing the latest technical advances in emerging technologies related to generative AI and security.

TOPIC SUMMARY:

To give a comprehensive introduction, we solicit papers presenting the latest developments in all aspects of security and generative AI. Given this broad scope, we prioritize topics as follows:

  1. Generative AI for security. This special issue is highly interested in the development of new AI-based attacks and defenses that use generative AI as a tool to improve or evaluate the security of systems. Potential topics include generative AI for malware analysis, code generation, and cryptography.
  2. Security of generative AI. This special issue also looks forward to featuring papers that concentrate on the security of generative AI itself. Within this topic, we are interested in all model flavors and input data types (images, text, sound, etc.) commonly used in generative AI. Possible topics of interest include adversarial examples, poisoning attacks, and both centralized and decentralized settings.

We invite submissions that extend and challenge current knowledge about the intersection of generative AI and security.

Suggested topics include, but are not limited to: 

  • Implementation attacks and generative AI
  • Malware analysis and generative AI
  • Security benchmarking of generative AI (LLMs)
  • Code generation, code line anomalies, and bug fixes with generative AI
  • Hardware design with generative AI
  • Watermarking and copyright protection of generative AI
  • Adversarial examples
  • Poisoning attacks
  • Privacy of generative AI
  • Jailbreaking attacks
  • Prompt injection and stealing attacks
  • Sponge attacks
  • Federated and decentralized learning
  • Explainable AI (XAI)
  • Safety of AI agents
  • Toxicity and harmfulness of AI-generated content
  • Detection of deepfakes
  • Red-teaming of generative AI (LLMs)
  • Fairness and machine interpretability

Submission Guidelines

For author information and submission criteria for full papers, please visit the Author Information page. As stated there, full papers should be 4,900 to 7,200 words in length, with no more than 15 references; related work should appear in a separate, clearly marked box. Manuscripts must not have been previously published or be under submission elsewhere. Please submit full papers through the ScholarOne system, and be sure to select the special-issue name. Submit only full papers intended for peer review, not opinion pieces.


Questions?

Contact the guest editors at sp5-25@computer.org.

  • Stjepan Picek, Radboud University, The Netherlands
  • Lorenzo Cavallaro, University College London, UK
  • Jason Xue, CSIRO’s Data61, Australia