The regularity of devastating cyber-attacks has made cybersecurity a grand societal challenge. To combat this challenge, many organizations have aimed to develop timely, relevant, and actionable intelligence about emerging threats and key threat actors to enable effective cybersecurity decisions. This process, also referred to as Cyber Threat Intelligence (CTI), has quickly emerged as a key aspect of cybersecurity. At its core, CTI is a data-driven process that relies on the systematic and large-scale analysis of log files, malware binaries, events, Open Source Intelligence (OSINT), and other rapidly evolving cybersecurity data sources. Artificial intelligence (AI)-based methods such as machine learning, data mining, text mining, network science, and deep learning hold significant promise for sifting through large quantities of structured, semi-structured, and unstructured cybersecurity data to deliver novel CTI capabilities with unprecedented efficiency and effectiveness. Despite their rapid proliferation throughout the academic and industry CTI landscape, AI methods are often black boxes. As a result, it is often unclear how and why an algorithm arrived at its decisions. This lack of interpretability can degrade model performance, prevent systematic model tuning, and reduce algorithm trustworthiness. Ultimately, these drawbacks hinder key stakeholders (e.g., security analysts) from effectively leveraging AI-based decisions for critical CTI tasks (e.g., security control deployment).
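To make the notion of interpretability concrete for prospective authors, the brief sketch below illustrates one widely used post-hoc explanation technique, permutation feature importance via scikit-learn, applied to a hypothetical black-box threat detector trained on synthetic data. The dataset, feature names, and model choice are illustrative assumptions only, not a prescribed approach for submissions.

```python
# Illustrative sketch only: a post-hoc explanation (permutation feature
# importance) for a hypothetical "black box" threat detector. The data,
# feature names, and model are placeholders, not a prescribed method.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a CTI dataset: rows are network flows, columns are
# engineered features (e.g., bytes sent, port entropy), and labels mark
# malicious vs. benign traffic.
X, y = make_classification(n_samples=2000, n_features=10, n_informative=4, random_state=0)
feature_names = [f"flow_feature_{i}" for i in range(X.shape[1])]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An accurate but opaque detector whose individual decisions are hard for a
# security analyst to audit.
detector = RandomForestClassifier(n_estimators=200, random_state=0)
detector.fit(X_train, y_train)

# Post-hoc explanation: estimate how much each feature contributes to
# held-out detection performance by permuting it and measuring the drop.
result = permutation_importance(detector, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(feature_names, result.importances_mean), key=lambda pair: -pair[1])
for name, score in ranked:
    print(f"{name}: {score:.3f}")
```

Submissions are, of course, free to pursue intrinsically interpretable models, attribution-based explanations, or any other XAI technique; the sketch merely illustrates the kind of transparency into model decisions that this special issue targets.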
In light of these critical limitations, this special issue seeks high-quality papers on emerging applications, techniques, and methodologies of Explainable Artificial Intelligence (XAI) for CTI. Topics of interest include, but are not limited to:
- Interpretable multi-view representation learning for fusing disparate CTI data sources (e.g., threat feeds)
- Interpretable adversarial learning for CTI applications
- Explainable deep learning on graph-structured cybersecurity data
- Real-time XAI for cyber threat detection
- Explainable deep Bayesian learning for CTI
- Intelligent feature selection for interpretable CTI analytics (e.g., malware analysis, IP reputation services, etc.)
- XAI-based diachronic linguistics to detect emerging threats from Social Media Intelligence (SOCINT)
- Dark Web analytics for proactive CTI applications
- Explainable OSINT analytics for cybersecurity applications
- XAI methods for Internet of Things (IoT) fingerprinting, anomaly detection, network telescope analysis, measurements, and related tasks
- Fusion of emerging XAI-based methods with conventional CTI analytics (e.g., event correlation, IP reputation services)
- XAI for CTI augmentation (e.g., human-in-the-loop systems)
All accepted manuscripts are expected to make a significant scientific contribution. Contributions to this special issue include, but are not limited to, novel representations of emerging cybersecurity data, novel algorithms, and new CTI systems. Each manuscript must clearly articulate its data (e.g., key metadata, statistical properties, etc.), analytical procedures (e.g., representations, algorithm details, etc.), and evaluation setup and results (e.g., performance metrics, statistical tests, case studies, etc.). Making data, code, and processes publicly available to facilitate scientific reproducibility is not required, but is strongly encouraged. Given the scope of this special issue, articles are expected to clearly articulate how and why their proposed approaches fall into the category of XAI.
Important Dates
Manuscript Submission Deadline: 1 February 2021
First Round of Reviews: 1 May 2021
Revised Papers Due: 31 July 2021
Final Notification: 30 September 2021
Final Manuscript Due: 31 October 2021
Tentative Publication Date: November/December 2021 or January/February 2022
Submission Guidelines
Papers submitted to this special issue for possible publication must be original and must not be under consideration for publication in any other journal or conference. TDSC requires meaningful technical novelty in submissions that extend previously published conference papers. Extension beyond the conference version(s) is not simply a matter of length. Thus, expanded motivation, expanded discussion of related work, variants of previously reported algorithms, or incremental additional experiments/simulations may provide additional length but will fall below the line for proceeding with review. Please read the Author Information and Journal Peer Review pages. Submissions must be made directly via the TDSC submission website at https://mc.manuscriptcentral.com/tdsc-cs.
Guest Editors
- Dr. Hsinchun Chen (hchen@eller.arizona.edu), Regents Professor, Management Information Systems, University of Arizona
- Dr. Bhavani Thuraisingham (bhavani.thuraisingham@utdallas.edu), Professor, Computer Science, University of Texas at Dallas
- Dr. Murat Kantarcioglu (muratk@utdallas.edu), Professor, Computer Science, University of Texas at Dallas
- Dr. Sagar Samtani (ssamtani@iu.edu), Assistant Professor, Operations and Decision Technologies, Indiana University
Questions can be directed to the guest editor team. When contacting the guest editors, please include multiple editors in the correspondence to ensure a timely response to inquiries.