From Pre- to Intra-operative MRI: Predicting Brain Shift in Temporal Lobe Resection for Epilepsy Surgery
Introduction: In neurosurgery, Image-Guided Neurosurgery Systems (IGNS) rely heavily on preoperative brain magnetic resonance images (MRI) to help surgeons locate surgical targets and plan surgical paths. However, brain shift invalidates the preoperative MRI after dural opening. Updated intraoperative brain MRI with brain-shift compensation is therefore crucial for enhancing the precision of neuronavigation systems and ensuring optimal surgical outcomes. Methodology: We propose NeuralShift, a U-Net-based model that predicts brain shift entirely from preoperative MRI for patients undergoing temporal lobe resection. We evaluated our results using Target Registration Errors (TREs) computed on anatomical landmarks located on the resection side and along the midline, and Dice scores comparing predicted intraoperative masks with masks derived from intraoperative MRI. Results: Our experiments show that the model can predict the global deformation of the brain (Dice of 0.97) with accurate local displacements (landmark TREs as low as 1.12 mm), compensating for the large brain shifts that occur during temporal lobe removal. Conclusion: Our proposed model predicts the global deformation of the brain during temporal lobe resection using only preoperative images, offering the surgical team an opportunity to increase the safety and efficiency of neurosurgery and improve patient outcomes. Our contributions will be made publicly available after acceptance at https://github.com/SurgicalDataScienceKCL/NeuralShift.
💡 Research Summary
This paper introduces NeuralShift, a deep‑learning framework designed to predict intra‑operative brain deformation (brain shift) solely from pre‑operative magnetic resonance images (pMRI) in patients undergoing temporal lobe resection for epilepsy. Brain shift, caused by cerebrospinal fluid drainage, gravity, tissue resection, and other intra‑operative factors, degrades the accuracy of image‑guided neurosurgery systems that rely on pre‑operative scans. While intra‑operative MRI (iMRI) can capture the deformation, its high cost, acquisition time, and limited availability restrict routine clinical use. NeuralShift aims to provide a rapid, pre‑operative estimate of the deformation field, enabling surgeons to anticipate and compensate for brain shift without waiting for iMRI.
Dataset and Pre‑processing
The authors assembled a retrospective cohort of 98 patients from the National Hospital for Neurology and Neurosurgery (London), each providing a paired pre‑operative T1‑weighted MRI and an iMRI acquired after tissue resection but before closure. The iMRI includes the resection cavity and associated intensity heterogeneity. To bring both modalities into a common anatomical space, a bespoke pipeline was built: three anatomical landmarks (anterior commissure, posterior commissure, inter‑commissural height) were manually annotated on each scan, defining an AC‑PC‑IH orthogonal coordinate system. Images were reoriented, rigidly registered, and then aligned to the MNI template via affine registration and skull stripping. Intensity normalisation, bias‑field correction, and cropping produced standardized 3‑D volumes for network input.
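The final normalisation and cropping steps of the pipeline can be sketched as follows. This is an illustrative NumPy sketch, not the authors' code: the function names, the z-score normalisation choice, and the example shapes (a 193×229×193 MNI-space volume cropped to 160×192×160) are assumptions.

```python
import numpy as np

def zscore_normalise(volume, mask=None):
    """Normalise intensities to zero mean, unit variance (within the brain mask if given)."""
    voxels = volume[mask > 0] if mask is not None else volume
    return (volume - voxels.mean()) / (voxels.std() + 1e-8)

def centre_crop(volume, shape):
    """Crop a 3-D volume symmetrically around its centre to a fixed network input size."""
    starts = [(s - t) // 2 for s, t in zip(volume.shape, shape)]
    slices = tuple(slice(st, st + t) for st, t in zip(starts, shape))
    return volume[slices]

# Hypothetical example: an MNI-aligned volume reduced to a network-ready size.
vol = np.random.default_rng(0).normal(100.0, 20.0, size=(193, 229, 193))
out = centre_crop(zscore_normalise(vol), (160, 192, 160))
```

In practice these steps would follow the rigid and affine registrations, skull stripping, and bias-field correction described above, which require dedicated tooling rather than plain NumPy.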
Surrogate Ground‑Truth Generation
Because true voxel‑wise displacement fields are unavailable, the authors generated surrogate labels using non‑rigid registration. The pre‑operative scan was registered to its corresponding iMRI using NiftyReg’s Fast Free‑Form Deformation (F3D) algorithm, yielding a cubic B‑spline deformation model. From this model, a dense displacement field u(x)=y(x)−x (in millimetres) was extracted. Additionally, the iMRI brain mask was obtained by intensity thresholding, and its signed distance function (SDF) was computed. These three items (displacement field, mask, SDF) constitute the supervision targets.
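The SDF supervision target can be derived from the binary intra-operative mask with a Euclidean distance transform. The sketch below (not the authors' code) uses SciPy and adopts one common sign convention, negative inside the brain and positive outside; the toy ball-shaped mask is purely illustrative.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask, spacing=(1.0, 1.0, 1.0)):
    """Signed distance function of a binary mask: negative inside, positive outside."""
    inside = distance_transform_edt(mask, sampling=spacing)              # distance to the surface, inside voxels
    outside = distance_transform_edt(np.logical_not(mask), sampling=spacing)  # distance to the surface, outside voxels
    return outside - inside

# Toy example: a ball of radius 8 voxels inside a 32^3 grid.
grid = np.indices((32, 32, 32)) - 15.5
mask = (grid ** 2).sum(axis=0) <= 8 ** 2
sdf = signed_distance(mask)
```

The displacement field itself comes from resampling the F3D B-spline model onto the voxel grid, which is handled by NiftyReg's own tools.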
Model Architecture
NeuralShift employs a 3‑D U‑Net as the inference module. Input channels consist of the normalized pre‑operative MRI and a binary “half‑mask” indicating the hemisphere slated for resection (left or right), providing coarse laterality information without explicit cavity geometry. The network outputs three volumes: (1) a dense displacement field, (2) an intra‑operative brain mask, and (3) the corresponding SDF. Predicting the mask and SDF alongside the displacement field supplies global shape constraints that pure voxel‑wise regression lacks, especially in the presence of resection cavities.
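Since the half-mask carries only laterality, it can be constructed by splitting the standardised volume at the mid-sagittal plane. A minimal sketch, assuming an MNI-aligned volume whose first axis is the left-right axis (the axis choice and shapes are assumptions, not taken from the paper):

```python
import numpy as np

def hemisphere_half_mask(shape, side, lr_axis=0):
    """Binary half-mask marking the hemisphere slated for resection.

    Assumes the volume is in a standard space so the mid-sagittal plane
    splits the left-right axis at its midpoint.
    """
    mask = np.zeros(shape, dtype=np.float32)
    mid = shape[lr_axis] // 2
    index = [slice(None)] * len(shape)
    index[lr_axis] = slice(0, mid) if side == "left" else slice(mid, None)
    mask[tuple(index)] = 1.0
    return mask

pmri = np.zeros((160, 192, 160), dtype=np.float32)    # normalised pre-operative MRI (placeholder)
half = hemisphere_half_mask(pmri.shape, side="left")  # laterality channel
net_input = np.stack([pmri, half], axis=0)            # (2, D, H, W) two-channel network input
```

The U-Net then maps this two-channel input to three outputs: a 3-channel displacement field, a 1-channel mask, and a 1-channel SDF.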
Loss Functions
Training optimises a weighted multi‑task loss:
- Displacement loss: Cartesian mean‑squared error (MSE) plus an auxiliary spherical‑coordinate loss that penalises errors in elevation angle, azimuth, and magnitude, encouraging accurate displacement directions as well as magnitudes.
- Mask loss: Dice loss combined with an edge loss based on morphological dilation to sharpen boundary alignment.
- SDF loss: voxel‑wise MSE between predicted and ground‑truth signed distances. The overall loss is L = α·(MSE + spherical) + β·(Dice + edge) + γ·MSE_SDF, with α, β, γ tuned to balance the different scales.
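The spherical-coordinate and Dice terms can be sketched as below. This is an illustration under assumptions, not the authors' implementation: the angle conventions are one standard choice, the angular terms here do not handle the 2π wrap-around of the azimuth, and the loss weights are omitted.

```python
import numpy as np

def to_spherical(u):
    """Decompose displacement vectors of shape (..., 3) into magnitude, elevation, azimuth."""
    r = np.linalg.norm(u, axis=-1)
    elev = np.arccos(np.clip(u[..., 2] / (r + 1e-8), -1.0, 1.0))
    azim = np.arctan2(u[..., 1], u[..., 0])  # NB: a production loss should handle 2*pi periodicity
    return r, elev, azim

def displacement_loss(pred, gt):
    """Cartesian MSE plus a spherical term penalising magnitude and angular errors."""
    cart = np.mean((pred - gt) ** 2)
    rp, ep, ap = to_spherical(pred)
    rg, eg, ag = to_spherical(gt)
    sph = np.mean((rp - rg) ** 2) + np.mean((ep - eg) ** 2) + np.mean((ap - ag) ** 2)
    return cart + sph

def dice_loss(pred_mask, gt_mask):
    """Soft Dice loss: 0 for perfect overlap, 1 for none."""
    inter = (pred_mask * gt_mask).sum()
    return 1.0 - 2.0 * inter / (pred_mask.sum() + gt_mask.sum() + 1e-8)
```

The full objective would weight these terms together with the edge and SDF losses via α, β, γ as stated above.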
Experimental Design and Results
A 9‑fold cross‑validation scheme ensured every patient contributed to both training and testing. Performance was measured by:
- Dice Similarity Coefficient (Dice) between predicted and ground‑truth intra‑operative brain masks.
- Target Registration Error (TRE) computed on ten anatomical landmarks (including those on the resection side and midline) after warping the pre‑operative scan with the predicted displacement field.
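The TRE metric above amounts to warping each pre-operative landmark by the predicted displacement and measuring the residual distance to its intra-operative counterpart. A minimal sketch (assuming 1 mm isotropic voxels so voxel coordinates equal millimetres; the toy field and landmarks are invented for illustration):

```python
import numpy as np

def tre(landmarks_pre, landmarks_intra, disp):
    """Per-landmark Target Registration Error in mm.

    `disp` is a dense displacement field of shape (3, D, H, W) in mm and the
    landmarks are (N, 3) coordinates in the same space. Nearest-voxel sampling
    keeps the sketch short; trilinear interpolation of `disp` would be more accurate.
    """
    idx = np.round(landmarks_pre).astype(int)
    u = disp[:, idx[:, 0], idx[:, 1], idx[:, 2]].T   # displacement sampled at each landmark, (N, 3)
    warped = landmarks_pre + u                        # landmark positions after applying the predicted shift
    return np.linalg.norm(warped - landmarks_intra, axis=1)

# Toy check: a uniform 2 mm shift along the first axis is perfectly predicted.
disp = np.zeros((3, 32, 32, 32)); disp[0] = 2.0
errors = tre(np.array([[10.0, 10.0, 10.0]]),
             np.array([[12.0, 10.0, 10.0]]), disp)
```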
NeuralShift achieved a mean Dice of 0.97 ± 0.01, indicating near‑perfect overlap of predicted and actual brain masks. Landmark TRE averaged 1.12 mm (range 0.9–1.5 mm), substantially lower than previously reported CNN‑based methods (e.g., W‑Net) and comparable to or better than biomechanical FEM approaches that typically require several minutes of computation. Ablation studies demonstrated that removing mask or SDF supervision degraded Dice to 0.92 and increased TRE to ~1.8 mm, confirming the importance of global shape constraints.
Contributions and Availability
The paper’s primary contributions are:
- A robust pre‑to‑intra‑operative registration pipeline that accounts for resection laterality and produces high‑quality deformation targets.
- A U‑Net‑based architecture jointly learning displacement fields, brain masks, and SDFs, achieving state‑of‑the‑art accuracy.
- Comprehensive validation with cross‑validation, TRE analysis, and ablation experiments.
All code, trained models, and documentation will be released publicly at https://github.com/SurgicalDataScienceKCL/NeuralShift upon acceptance.
Limitations and Future Work
Current limitations include reliance on a coarse half‑mask rather than precise cavity geometry, potential bias introduced by using registration‑derived surrogate labels instead of true biomechanical measurements, and validation on a single institution’s T1‑weighted data only. Future directions suggested are: (i) integrating intra‑operative ultrasound or electrophysiological maps as additional inputs, (ii) combining patient‑specific biomechanical parameters with the learned model for a hybrid approach, (iii) expanding to multi‑center, multi‑sequence datasets for broader generalisation, and (iv) leveraging predicted deformation fields to adapt surgical trajectories and provide real‑time risk alerts.
Overall Impact
NeuralShift demonstrates that deep learning can predict clinically relevant brain shift with millimetre‑level accuracy using only pre‑operative imaging, potentially reducing dependence on costly intra‑operative MRI, shortening operative time, and improving the safety and efficacy of image‑guided neurosurgery. By making the method openly available, the authors invite the community to build upon this foundation toward more personalized, real‑time navigation solutions in brain surgery.