Abstracts due: 30 June 2022 (optional; abstracts should be emailed directly to the guest editors)
Open for submissions in ScholarOne Manuscripts: 1 September 2022
Submissions due: 31 December 2022
Publication: November/December 2023
Artificial intelligence (AI), driven by successes in machine learning, now permeates virtually all areas of our daily lives, making or at least influencing decisions. In areas that impact human life (such as agriculture, climate, forestry, and health), ethical and legal aspects of such decisions, including transparency, fairness, and trust, are receiving increasing attention. As a result, hundreds of ethical frameworks have been published by government agencies, large corporations, academic institutions, and other organizations. Adopting these principles is widely seen as one of the best ways to ensure that AI does not cause unintended harm and is used safely and responsibly. However, owing to the complexity of AI, implementing ethical and legal frameworks for AI in practice remains a challenge. This special issue presents and discusses recent research on theories, tools, metrics, standards, and best practices for implementing technical, ethical, and legal frameworks for the safe and responsible use of AI.
This special issue invites original theoretical and practical research on designing, developing, presenting, testing, and evaluating approaches for implementing AI frameworks that support trust in AI. Submissions may include cutting-edge theories and foundations, actionable tools, and impactful case studies of AI ethical framework implementations, supported by advanced AI techniques and by interdisciplinary research, in particular from social science, law, and cognitive science. The aim is to foster interdisciplinary and transdisciplinary approaches and to stimulate cross-domain integration of diverse disciplines, making AI ethical principles operable in real applications. A suitable submission must also demonstrate its relevance to IEEE Intelligent Systems, the premier publication featuring intelligent systems and artificial intelligence, with particular emphasis on current practice and experience, together with promising new ideas that are likely to be used in the near future. Topics of interest include:
- Approaches for ensuring calibrated trust in AI
- Guidelines for AI ethical framework implementations
- Standardization of AI ethical framework implementations
- AI lifecycle and AI ethical framework implementations
- Quantifying ethical values in AI
- Minimizing risks and harms in AI ethical framework implementations
- Fairness, accountability, and transparency in AI ethical framework implementations
- Privacy, confidentiality, and security in AI ethical framework implementations
- Actionable metrics that can be measured and monitored for AI ethics
- Novel user experience design and evaluation methods for AI ethical framework implementations
- Legal and policy dimensions and implications of AI
- Best practices on AI ethical framework implementations
- Applications of AI to empower and better serve under-resourced individuals, groups, and communities
Submission Guidelines
For author information and guidelines on submission criteria, please visit the IS Author Information page. Please submit papers through the ScholarOne system, and be sure to select the special-issue name. Manuscripts should not have been published previously and should not be under consideration for publication elsewhere. Please submit only full papers intended for review, not abstracts, to the ScholarOne portal. Abstracts should be emailed directly to the guest editors at is6-23@computer.org.
Questions?
Contact the guest editors at is6-23@computer.org.
Guest Editors
- Prof. Fang Chen, University of Technology Sydney, Australia
- Prof. Andreas Holzinger, Medical University Graz, Austria
- A/Prof. Jianlong Zhou, University of Technology Sydney, Australia
- Prof. Kenneth R. Fleischmann, University of Texas at Austin, USA
- Dr. Simone Stumpf, University of Glasgow, UK