CLOSED: Call for Papers: Special Issue on Multimodal Neural Network Pretraining

IEEE Intelligent Systems seeks submissions for this upcoming special issue.

Important Dates

  • Submissions Due: 30 September 2023
  • Publication: November/December 2024


Pre-training has recently become a dominant paradigm for deep model initialization, establishing state-of-the-art performance on many multimedia analysis tasks. Following this trend, a variety of large-scale pre-trained models have been developed and deployed to improve model robustness and uncertainty estimates across multimedia applications. Among them, BERT, GPT, ViT, UNITER, and their variants have achieved great success and become new milestones in the vision, language, and broader multimedia fields. By storing sophisticated knowledge in a large number of parameters, pre-trained models can capture semantic relations among large amounts of labeled and unlabeled data through self-supervised learning, and then provide stronger representations for a variety of downstream tasks. Despite this momentum, many aspects of these models remain under-explored across applications. A comprehensive review and comparison of the latest breakthroughs in model pre-training, spanning theories, algorithms, and applications, is therefore in high demand. Moreover, building advanced pre-training architectures and identifying new research directions are of prominent importance for multimedia intelligence.
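The pre-train-then-adapt pattern described above can be sketched in a few lines. This is a toy illustration only: the random projection stands in for a real pre-trained encoder (it is not code from any model named here), and the downstream task is synthetic. It shows the shape of the paradigm, in which a frozen encoder supplies representations and only a small task head is fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a large pre-trained encoder: a fixed (frozen) projection
# whose weights would, in practice, come from self-supervised pre-training.
W_pretrained = rng.normal(size=(32, 8))

def encode(x):
    """Map raw inputs into the representation space learned in pre-training."""
    return np.tanh(x @ W_pretrained)

# Downstream adaptation: fit only a small linear head on top of the frozen
# encoder (a "linear probe"), here with least squares as the training step.
X = rng.normal(size=(100, 32))   # toy downstream inputs
y = rng.normal(size=(100, 1))    # toy downstream targets
H = encode(X)                    # frozen pre-trained representations
head, *_ = np.linalg.lstsq(H, y, rcond=None)

preds = encode(X) @ head         # predictions from encoder + fitted head
```

Full fine-tuning would additionally update the encoder weights; the frozen-encoder variant above is the cheaper adaptation that many of the topics below (efficient architectures, distillation, compression) aim to improve on.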

This special issue will offer a timely collection of original contributions to benefit researchers and practitioners working on multi-modal learning and multimedia understanding in intelligent systems. Submissions should address research problems relevant to the multimedia community and within the scope of IEEE Intelligent Systems.

Topics of interest include, but are not limited to:

  • New architectures, theories, and applications on multi-modal model pre-training
  • Fine-tuning and adaptation of pre-trained multi-modal models
  • Efficient multi-modal pre-training architectures
  • Knowledge distillation and model compression for multi-modal model pre-training
  • Cognitive- or knowledge-inspired multi-modal pre-training architectures
  • Applications of multi-modal model pre-training in various multimedia areas
  • Surveys or reviews of recent advances in multi-modal pre-trained models


Submission Guidelines

For author information and guidelines on submission criteria, please visit the IS Author Information page. Please submit papers through the ScholarOne system, and be sure to select the special-issue name. Manuscripts should not be published or currently submitted for publication elsewhere. Please submit only full papers intended for review, not abstracts, to the ScholarOne portal.


Questions?

Email the guest editor at is6-24@computer.org.

Guest Editors:

  • Can Wang (lead), Griffith University, Gold Coast, QLD, Australia
  • Zheng Zhang, Harbin Institute of Technology, Shenzhen, China
  • Lei Zhu, Shandong Normal University, Jinan, China
  • Jianxin Li, Deakin University, Melbourne, Australia 
