Abstract
For many practical applications in medical image analysis and computer-aided diagnosis (CAD), it is necessary to accurately capture intricate anatomical and pathological details across imaging acquisitions in different modalities. We introduce a novel transformer-based Generative Adversarial Network (GAN) model designed for combined super-resolution and modality translation of magnetic resonance (MR) images. The model aims to improve clinical workflows by enhancing image resolution and translating between imaging modalities, e.g., T1- and T2-weighted MRI data, offering more detailed visualization that could aid diagnosis and treatment planning. To demonstrate its potential, the approach will be validated quantitatively and qualitatively on the publicly available BraTS dataset, targeting a 4x increase in resolution and modality translation between T1 and T2 MRI pairs.