2024 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)

Abstract

Brain tumor segmentation requires processing long Magnetic Resonance Imaging (MRI) sequences across multiple imaging modalities. Traditional convolutional neural networks struggle to capture long-range dependencies, while Transformers demand substantial computational resources. To reduce computational cost and make full use of the information from the different imaging modalities, we propose VCSeg, a Mamba-based Vote-and-Cooperate Segmentation network for brain tumor segmentation. Features of each modality's imaging sequence are extracted along multiple spatial orientations and assigned weights conditioned on both modality and orientation, enabling the model to adaptively learn modality-specific feature distributions for both adult and pediatric patients. A multi-modal cooperation module further strengthens the feature encoding, and deep supervision enables progressive mask generation, improving decoding quality. Experiments on the BraTS2019-MEN and BraTS2023-PED datasets show that VCSeg achieves state-of-the-art results, confirming the effectiveness of our method.
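The abstract does not specify how the modality- and orientation-conditioned weighting ("vote") is realized; the sketch below is one minimal, hypothetical way to implement such a fusion in PyTorch. The module name `DirectionModalityVote`, the tensor layout, and the softmax-over-logits weighting are all assumptions for illustration, not the authors' actual architecture.

```python
import torch
import torch.nn as nn


class DirectionModalityVote(nn.Module):
    """Hypothetical sketch: fuse per-modality, per-orientation feature maps
    with learned weights, one weight per (modality, orientation) pair."""

    def __init__(self, num_modalities: int = 4, num_orientations: int = 3):
        super().__init__()
        # One learnable logit per (modality, orientation) pair; softmax turns
        # these into a normalized "vote" over all pairs.
        self.vote_logits = nn.Parameter(torch.zeros(num_modalities, num_orientations))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, M, O, C, D, H, W) -- batch, modality, orientation,
        # channels, and 3D spatial dims.
        b, m, o = feats.shape[0], feats.shape[1], feats.shape[2]
        weights = torch.softmax(self.vote_logits.flatten(), dim=0)
        weights = weights.view(1, m, o, 1, 1, 1, 1)
        # Weighted sum over the modality and orientation axes -> (B, C, D, H, W).
        return (feats * weights).sum(dim=(1, 2))


# Example usage: 4 MRI modalities, 3 scan orientations, 32-channel features.
x = torch.randn(1, 4, 3, 32, 8, 8, 8)
fused = DirectionModalityVote()(x)  # shape: (1, 32, 8, 8, 8)
```

A single softmax over all modality-orientation pairs lets the network downweight uninformative combinations globally, which is one plausible reading of "assigned weights based on both modality and orientation"; the paper itself should be consulted for the actual mechanism.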