Abstract
Early diagnosis of brain tumors remains challenging. Single-modality imaging suffers from limitations such as incomplete structural representation and a lack of complementary information, which reduce segmentation accuracy and complicate the identification of complex tumor structures. In addition, performance degradation becomes a notable problem as network depth increases. This work presents a multimodal fusion approach that efficiently combines complementary information from multiple MRI modalities to address these challenges. By mitigating the limitations of single-modality data, the approach provides the model with richer, more comprehensive features and thereby improves the characterization of tumor regions. A Res-UNet model is employed, combining the strengths of the U-Net architecture for biomedical image segmentation with those of residual modules. This combination effectively addresses the degradation problem associated with increased network depth and markedly improves segmentation accuracy, particularly in delineating tumor boundaries and distinguishing different tumor regions. Experimental results on public datasets show that the proposed model outperforms existing models on all evaluation metrics, preserving lesion details and precisely segmenting tumor boundaries. This advance offers substantial support for clinical brain tumor diagnosis and contributes to medical imaging, bioinformatics, and related fields.
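The residual modules referenced above follow the standard identity-shortcut pattern, y = F(x) + x, which lets gradients bypass the transformed path and mitigates degradation in deep networks. The sketch below is purely illustrative: the toy linear "convolution", the weight shapes, and the function names are assumptions for demonstration, not the paper's actual Res-UNet implementation.

```python
import numpy as np

def conv_like(x, w):
    # Stand-in for a convolutional layer (linear map + ReLU).
    # Illustrative only; a real Res-UNet uses 2D/3D convolutions.
    return np.maximum(w @ x, 0.0)

def residual_block(x, w1, w2):
    # Core idea of a residual module: y = F(x) + x.
    # The identity shortcut gives gradients a direct path around F,
    # which is what counteracts degradation as depth grows.
    f = conv_like(conv_like(x, w1), w2)
    return f + x  # identity shortcut

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
w1 = rng.standard_normal((8, 8))
w2 = rng.standard_normal((8, 8))
y = residual_block(x, w1, w2)
print(y.shape)  # (8,)
```

Note that when the transformed path contributes nothing (zero weights), the block reduces to the identity, which is exactly the property that makes very deep stacks of such blocks trainable.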