2023 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)

Abstract

High annotation costs are a significant hurdle to deploying modern deep learning architectures in clinically relevant medical applications, especially when dealing with the inherent heterogeneity of multimodal data, underscoring the critical need for algorithms that can effectively exploit unlabeled data. In this paper, we propose MCLCA, a model that integrates multimodal contrastive learning and cross-modal attention to diagnose Alzheimer’s Disease (AD) and identify biomarkers from both labeled and unlabeled multimodal brain imaging genetics data. Through multimodal contrastive learning, MCLCA learns effective representations even when labels are scarce. Through cross-modal attention blocks, the model captures deep connections between modalities, providing a more comprehensive view for diagnosis. We evaluate MCLCA on the ADNI database with three imaging modalities (VBM-MRI, FDG-PET, and AV45-PET) and genetic SNP data. The results demonstrate that MCLCA identifies important biomarkers and achieves better prediction accuracy than existing methods. The source code is available at https://github.com/MCLCA.
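To make the two components named in the abstract concrete, below is a minimal PyTorch sketch, not the authors' released implementation, of (i) an InfoNCE-style contrastive loss that aligns two modalities of the same subject without labels and (ii) a cross-modal attention block in which one modality's features attend to another's. All names and dimensions (EMBED_DIM, multimodal_contrastive_loss, CrossModalAttention, the token counts) are illustrative assumptions.

```python
# Illustrative sketch only; the paper's actual architecture may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMBED_DIM = 128  # assumed shared embedding size for all modalities


def multimodal_contrastive_loss(z_a: torch.Tensor,
                                z_b: torch.Tensor,
                                temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE loss: pulls together embeddings of the same subject from two
    modalities (e.g., VBM-MRI and SNP) and pushes apart embeddings of
    different subjects. Requires no diagnostic labels."""
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature          # (batch, batch) similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)
    # Symmetric loss: modality A -> B and modality B -> A.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


class CrossModalAttention(nn.Module):
    """Queries come from modality A, keys/values from modality B, so the
    fused output of A reflects inter-modality dependencies."""

    def __init__(self, dim: int = EMBED_DIM, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feats_a: torch.Tensor,
                feats_b: torch.Tensor) -> torch.Tensor:
        # feats_a: (batch, tokens_a, dim); feats_b: (batch, tokens_b, dim)
        attended, _ = self.attn(query=feats_a, key=feats_b, value=feats_b)
        return self.norm(feats_a + attended)      # residual connection


if __name__ == "__main__":
    batch = 8
    mri = torch.randn(batch, 16, EMBED_DIM)      # stand-in imaging tokens
    snp = torch.randn(batch, 32, EMBED_DIM)      # stand-in genetic tokens

    fuse = CrossModalAttention()
    fused = fuse(mri, snp)                       # imaging attends to genetics

    loss = multimodal_contrastive_loss(mri.mean(dim=1), snp.mean(dim=1))
    print(fused.shape, loss.item())
```

In practice such a loss is computed per modality pair (imaging-imaging and imaging-genetics) during unsupervised pre-training, after which the attention-fused features feed a supervised diagnosis head; that training split is an assumption here, consistent with the abstract's use of both unlabeled and labeled data.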