Important Dates
- Submission Deadline: 30 November 2023
- Notification of Acceptance: 30 March 2024
- Final Paper Deadline: 30 May 2024
- Publication: June 2024
Large language models (LLMs) have enabled challenging applications such as machine translation, text classification, question answering, and text generation. Among these models, ChatGPT and GPT-4 excel at generating human-like responses across diverse domains, including customer service, language learning, and chatbots.
This call for papers invites submissions on LLMs and multimodal large models, with a specific focus on their capabilities, challenges, limitations, risks, and ethics. We encourage authors to report original research, high-quality reviews and perspectives, and key applications covering a broad spectrum of topics, including but not limited to:
- Theoretical aspects of LLMs and multimodal large models, and their emergent behaviours;
- Optimization techniques for training and inference in LLMs and multimodal large models, especially prompting techniques;
- Efficient in-context learning and prompt-tuning techniques for LLMs and multimodal large models;
- Evaluation and benchmarking of large models, encompassing language-only and cross-modal data, especially the fusion of language, vision, and other modalities;
- Interpretation of LLMs and multimodal large models, enhancing transparency in decision-making processes;
- Architectures for these large models and their optimization;
- Sharing and security of big data associated with large models;
- Applications of LLMs and multimodal large models in healthcare, transportation, finance, education, remote sensing, and entertainment;
- Integration of large models with other AI technologies, such as computer vision, speech recognition, and robotics;
- Novel uses of large models for data intelligence and knowledge discovery, enabling deep insights from heterogeneous data sources;
- Social implications of large models, considering impacts on diverse stakeholders and ensuring fairness;
- Cross-modal generative AI for synthesizing data across multiple modalities, such as text, images, audio, and video;
- Techniques for detecting AI-generated content (AIGC) produced by LLMs and multimodal large models.
By fostering interdisciplinary collaboration and facilitating the exchange of ideas, this special issue will contribute to the development and utilization of LLMs, multimodal large models, cross-modal generative AI, and synthetic yet realistic big data, ultimately benefiting humanity at large. We welcome submissions from academic researchers and industry practitioners at the forefront of these areas.
Submission Guidelines
For author information and guidelines on submission criteria, please visit the TBD Author Information page. Please submit papers through the ScholarOne system, and be sure to select the special-issue name. Manuscripts must not have been published or be currently under review elsewhere. Please submit only full papers intended for review, not abstracts, to the ScholarOne portal.
Questions?
For inquiries or further information, please contact the lead guest editor, Dr. Guang Yang, at g.yang@imperial.ac.uk.
Guest Editors
- Guang Yang, Imperial College London
- Jing Zhang, University of Sydney
- Giorgos Papanastasiou, Archimedes Unit, Athena Research Centre, Greece
- Ge Wang, Rensselaer Polytechnic Institute
- Dacheng Tao, University of Sydney