Call for Papers: Special Issue on The Impact of AI on Productivity and Code

IEEE Software seeks submissions for this upcoming special issue.
Submissions Due: 14 August 2025

Publication: May/June 2026


Overview

Is AI truly the key to writing code faster and better? Or do alternative innovations, such as improved user interfaces [8] or other recent breakthroughs in software design [6-7], also play a significant role in enhancing developer productivity and programmer education?

In light of recent advances in AI, there has been no shortage of claims about its ability to transform the developer experience and teaching. The web is filled with promises of vast improvements, often linked to the power of large language models (LLMs) [1-2]. Vendors of tools such as GitHub Copilot and Supermaven assert that these tools can make coding faster and smarter by automating tasks, enhancing code quality, and streamlining development. For example, the GitHub Copilot website says the tool enables “55% faster coding,” while Supermaven’s website claims it lets developers “write code 2x faster with AI.” Amazon Q Developer’s website claims an increase in developer productivity of “up to 40%.”

At the same time, concerns have been raised that the speed offered by AI-assisted coding tools may come at the cost of code quality [2-4] and code comprehension. Some studies suggest a “downward pressure on code quality” [2] and security risks [5] when developers rely heavily on AI-generated code. While LLMs have undoubtedly proven useful in certain areas, AI-generated suggestions often require scrutiny to avoid introducing bugs or vulnerabilities.

Given these considerations, it is time for a deeper, data-driven investigation. We encourage studies that critically examine the impact of AI on developer productivity, code quality, and developer education. Particularly welcome are industrial case studies or case studies from the classroom that showcase real-world applications of AI tools. We also invite academic researchers to contribute to this discussion.

To move forward, we propose an objective evaluation: let us take the claims vendors make publicly and test their validity through rigorous, evidence-based inquiry. By doing so, we aim to provide a clearer picture for practitioners, researchers, and educators, ensuring that decisions about adopting AI in development are informed by solid, empirical evidence.

Focus

We invite researchers, practitioners, industry experts, and educators to submit original perspectives exploring aspects of developer productivity (or education) that include, but are not limited to, the following:

  • Industrial perspectives or experience reports (where a one-off case study offers insights into the value, or otherwise, of some AI tool)
  • Teaching perspectives or experience reports that comment on the effects of these AI tools on the education experience
  • Literature reviews of claims made by vendors and of studies testing those claims
  • Meta-reviews of prior studies in this area (ideally, analyzing results from multiple prior studies’ data and drawing larger-scale conclusions)
  • Critical, unbiased evaluations of tooling (e.g., GitHub Copilot and other tools)
  • Industry perspectives on other hindrances and facilitators of productivity, such as organizational policies, team dynamics, workplace culture, management styles, and remote work and in-office policies
  • Proposals for new methods, tooling, or any combination supported by evidence
  • Perspectives on how AI tools (including LLMs such as ChatGPT and Claude) impact the education of current students and developers in training. For example, this includes evidence-based notes from faculty on shifting trends in SE education and the role of AI.

Note that any industrial case studies should disclose any conflicts of interest with the AI vendor.


Submission Guidelines

For author information and guidelines on submission criteria, visit the IEEE Software Author Information page. Please submit papers through the IEEE Author Portal system, and be sure to select the special issue or special section name. Manuscripts must not have been published or be currently under consideration for publication elsewhere. Please submit only full papers intended for review, not abstracts, to the IEEE Author Portal.

This issue will accept short and regular papers.

  • Regular papers must not exceed 4,200 words, including figures and tables, which count as 250 words each.
  • Shorter reports of one-off case studies (1,500 words or more) are also encouraged.

Submissions in excess of these limits may be rejected without refereeing. The articles we deem within the theme and scope will be peer-reviewed and are subject to editing for magazine style, clarity, organization, and space. Be sure to include the name of the theme you’re submitting for.

Articles should have a practical orientation and be written in a style accessible to practitioners and educators. Overly complex, purely research-oriented, or theoretical treatments aren’t appropriate. Articles should be novel. IEEE Software doesn’t republish material published previously in other venues, including other periodicals and formal conference or workshop proceedings, whether previous publication was in print or electronic form.

[1]: See for example, [Cursor](https://www.cursor.com/), [GitHub Copilot](https://github.com/features/copilot), [Supermaven](https://supermaven.com/), and [Amazon Q Developer](https://aws.amazon.com/q/developer/).

[2]: https://www.gitclear.com/coding_on_copilot_data_shows_ais_downward_pressure_on_code_quality

[3]: Nguyen, N., & Nadi, S. (2022, May). An empirical evaluation of GitHub copilot’s code suggestions. In Proceedings of the 19th International Conference on Mining Software Repositories (pp. 1-5).

[4]: Dakhel, A. M., Majdinasab, V., Nikanjam, A., Khomh, F., Desmarais, M. C., & Jiang, Z. M. J. (2023). GitHub Copilot AI pair programmer: Asset or liability? Journal of Systems and Software, 203, 111734.

[5]: Perry, N., Srivastava, M., Kumar, D., & Boneh, D. (2023, November). Do users write more insecure code with AI assistants? In Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security (pp. 2785-2799).

[6]: See [Kakoune](https://kakoune.org/) and [Helix](https://helix-editor.com/), for example.

[7]: https://github.com/chrisgrieser/nvim-various-textobjs

[8]: See [lazygit](https://github.com/jesseduffield/lazygit) and [GitLens](https://marketplace.visualstudio.com/items?itemName=eamodio.gitlens), for example.

[9]: https://wasp-lang.dev/

[10]: See [Bun for Node.js projects](https://bun.sh/) and [uv for Python](https://docs.astral.sh/uv/), for example.


Questions? Contact the Lead Guest Editor at timm@ieee.org.