MRI Cross-Modal Synthesis: A Comparative Study of Generative Models for T1-to-T2 Reconstruction

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original ArXiv source.

MRI cross-modal synthesis involves generating images from one acquisition protocol using another, offering considerable clinical value by reducing scan time while maintaining diagnostic information. This paper presents a comprehensive comparison of three state-of-the-art generative models for T1-to-T2 MRI reconstruction: Pix2Pix GAN, CycleGAN, and Variational Autoencoder (VAE). Using the BraTS 2020 dataset (11,439 training and 2,000 testing slices), we evaluate these models based on established metrics including Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity Index (SSIM). Our experiments demonstrate that all models can successfully synthesize T2 images from T1 inputs, with CycleGAN achieving the highest PSNR (32.28 dB) and SSIM (0.9008), while Pix2Pix GAN provides the lowest MSE (0.005846). The VAE, though showing lower quantitative performance (MSE: 0.006949, PSNR: 24.95 dB, SSIM: 0.6573), offers advantages in latent space representation and sampling capabilities. This comparative study provides valuable insights for researchers and clinicians selecting appropriate generative models for MRI synthesis applications based on their specific requirements and data constraints.
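The three evaluation metrics above have standard definitions. A minimal NumPy sketch of each is shown below; the SSIM here is a simplified single-window (global) variant rather than the sliding-window form typically used in practice, and `data_range=1.0` assumes intensities normalized to [0, 1].

```python
import numpy as np

def mse(x, y):
    """Mean Squared Error between two images."""
    return float(np.mean((x - y) ** 2))

def psnr(x, y, data_range=1.0):
    """Peak Signal-to-Noise Ratio in dB; infinite for identical images."""
    e = mse(x, y)
    return float("inf") if e == 0 else float(10 * np.log10(data_range ** 2 / e))

def ssim_global(x, y, data_range=1.0):
    """Simplified SSIM computed over the whole image as a single window."""
    c1 = (0.01 * data_range) ** 2  # stabilizing constants from the SSIM paper
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float((2 * mx * my + c1) * (2 * cov + c2)
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

For example, a constant error of 0.1 on a [0, 1] image gives MSE = 0.01 and PSNR = 10 log10(1 / 0.01) = 20 dB.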


💡 Research Summary

This paper presents a systematic comparative study of three state‑of‑the‑art generative models—Pix2Pix GAN, CycleGAN, and Variational Autoencoder (VAE)—for the task of synthesizing T2‑weighted MRI from T1‑weighted inputs. Using the publicly available BraTS 2020 dataset, the authors extracted 57,195 two‑dimensional slices from the multimodal brain tumor scans, randomly selecting 11,439 slices for training and 2,000 slices for testing. All images were resized to 256 × 256 pixels and intensity‑normalized to the range
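The preprocessing pipeline described above (slice extraction, resizing to 256 × 256, intensity normalization) can be sketched as follows. This is an illustrative implementation, not the authors' code: nearest-neighbor resizing and min-max scaling to [0, 1] are assumptions, since the summary does not specify the interpolation method and is truncated before stating the normalization range.

```python
import numpy as np

def resize_nearest(img, size=(256, 256)):
    """Nearest-neighbor resize of a 2-D slice (interpolation method assumed)."""
    h, w = img.shape
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    return img[rows[:, None], cols]

def normalize(img, eps=1e-8):
    """Min-max scaling to [0, 1]; the target range is an assumption here."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + eps)

def preprocess_volume(vol):
    """vol: (n_slices, H, W) array -> list of 256x256 normalized 2-D slices."""
    return [normalize(resize_nearest(s)) for s in vol]
```

Applied to every BraTS volume, a pipeline like this would yield the pool of 2-D slices from which the paper's 11,439 training and 2,000 testing slices were drawn.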

