Deep Biomechanically-Guided Interpolation for Keypoint-Based Brain Shift Registration
Accurate compensation of brain shift is critical for maintaining the reliability of neuronavigation during neurosurgery. While keypoint-based registration methods offer robustness to large deformations and topological changes, they typically rely on simple geometric interpolators that ignore tissue biomechanics when producing dense displacement fields. In this work, we propose a novel deep learning framework that estimates dense, physically plausible brain deformations from sparse matched keypoints. We first generate a large dataset of synthetic brain deformations using biomechanical simulations. A residual 3D U-Net is then trained to refine standard interpolation estimates into biomechanically guided deformations. Experiments on a large set of simulated displacement fields demonstrate that our method significantly outperforms classical interpolators, halving the mean squared error while introducing negligible computational overhead at inference time. Code available at: \href{https://github.com/tiago-assis/Deep-Biomechanical-Interpolator}{https://github.com/tiago-assis/Deep-Biomechanical-Interpolator}.
💡 Research Summary
Accurate compensation of brain shift is essential for reliable neuronavigation during neurosurgery, yet conventional image‑based registration struggles with large deformations, modality gaps, and tissue resection. Keypoint‑based registration mitigates many of these issues by relying on sparse anatomical correspondences, but it typically uses simple geometric interpolators (linear or thin‑plate spline) that ignore the biomechanical properties of brain tissue, leading to physically implausible dense displacement fields.
This paper introduces a deep biomechanically‑guided interpolation framework that produces dense, physically realistic deformation fields from a limited set of matched keypoints. The authors first generate a large synthetic dataset using biomechanical simulations. Starting from the UPenn‑GBM cohort (162 patients), they segment brain parenchyma, cerebrospinal fluid, skull, and tumor using SynthSeg and manual tumor masks. These segmentations are converted into surface meshes and fed into the Meshless Total Lagrangian Explicit Dynamics (MTLED) simulator, which models gravity‑induced deformation and the mechanical response to tumor resection with an Ogden material model. For each patient, 1–3 distinct deformation fields are simulated, yielding 204 ground‑truth displacement volumes.
To emulate intra‑operative keypoints, the authors extract hundreds of 3D SIFT landmarks from each pre‑operative MRI and randomly sample M of them (M=20 in the main experiments). The corresponding ground‑truth displacements are obtained from the synthetic fields, forming a sparse displacement set. An initial dense field (ϕ_init) is created using either linear Delaunay interpolation or thin‑plate spline (TPS).
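The sparse-to-dense interpolation step can be sketched with SciPy. This is a minimal illustration, not the authors' code: the keypoint coordinates and displacements below are random stand-ins for the sampled SIFT landmarks and their ground-truth displacements, and the 64³ grid is an arbitrary example volume.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator, RBFInterpolator

rng = np.random.default_rng(0)

# Hypothetical stand-ins: M = 20 matched keypoints in a 64^3 volume and
# their ground-truth 3D displacements (mm) sampled from a simulated field.
M = 20
keypoints = rng.uniform(0, 63, size=(M, 3))
displacements = rng.normal(0, 2.0, size=(M, 3))

# Dense grid of voxel coordinates at which phi_init is evaluated.
grid = np.stack(np.meshgrid(*[np.arange(64)] * 3, indexing="ij"), axis=-1)
queries = grid.reshape(-1, 3)

# Linear interpolation over the Delaunay triangulation of the keypoints
# (undefined outside the convex hull, hence the zero fill value).
linear = LinearNDInterpolator(keypoints, displacements, fill_value=0.0)
phi_linear = linear(queries).reshape(64, 64, 64, 3)

# Thin-plate spline interpolation via SciPy's RBF interpolator.
tps = RBFInterpolator(keypoints, displacements, kernel="thin_plate_spline")
phi_tps = tps(queries).reshape(64, 64, 64, 3)
```

Both interpolants pass exactly through the keypoint displacements; they differ in how they extrapolate and in smoothness, which is precisely where the biomechanically implausible behavior the paper targets can arise.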
The core of the method is a residual 3D U‑Net that refines ϕ_init. The network receives the pre‑operative image (I_pre) and ϕ_init, predicts a residual displacement ε_θ, and outputs the final field ϕ_pred = ϕ_init + ε_θ. The architecture comprises four resolution levels (32→64→128→256 channels), residual blocks with squeeze‑and‑excitation modules, instance normalization, and LeakyReLU activations. Skip connections are summed rather than concatenated, keeping the model lightweight (≈7.3 M parameters).
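The residual formulation ϕ_pred = ϕ_init + ε_θ can be illustrated with a toy PyTorch module. This is a deliberately small stand-in for the paper's four-level U-Net: it keeps the characteristic ingredients (instance normalization, LeakyReLU, a squeeze-and-excitation block, and the additive residual output) but the layer count, channel width, and `ResidualRefiner` name are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: channel-wise gating from global average pooling."""
    def __init__(self, ch, r=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(ch, ch // r), nn.ReLU(inplace=True),
            nn.Linear(ch // r, ch), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3, 4)))      # (N, ch) channel descriptors
        return x * w[:, :, None, None, None]    # reweight feature maps

class ResidualRefiner(nn.Module):
    """Toy stand-in for the residual 3D U-Net: predicts a residual
    displacement eps_theta from (I_pre, phi_init) and adds it to phi_init."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(4, ch, 3, padding=1),     # 1 image + 3 displacement channels
            nn.InstanceNorm3d(ch), nn.LeakyReLU(0.01),
            nn.Conv3d(ch, ch, 3, padding=1),
            nn.InstanceNorm3d(ch), nn.LeakyReLU(0.01),
            SEBlock(ch),
            nn.Conv3d(ch, 3, 3, padding=1))     # residual field eps_theta

    def forward(self, image, phi_init):
        eps = self.net(torch.cat([image, phi_init], dim=1))
        return phi_init + eps                   # phi_pred = phi_init + eps_theta

model = ResidualRefiner()
image = torch.randn(1, 1, 16, 16, 16)      # pre-operative MRI patch
phi_init = torch.randn(1, 3, 16, 16, 16)   # interpolated initial field
phi_pred = model(image, phi_init)
```

The residual design means the network only has to learn the biomechanical correction on top of a geometrically reasonable starting point, rather than the full deformation field from scratch.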
Training is fully supervised using the synthetic ground‑truth fields. The loss combines a voxel‑wise mean squared error (MSE) and a Jacobian determinant regularizer applied only to healthy brain voxels (Ω_healthy). The regularizer penalizes negative Jacobians (via ReLU) to enforce local orientation consistency and prevent folding. λ_reg is set to 50 after validation. Data augmentation follows nnU‑Net practices (Gaussian noise, blur, intensity scaling, resolution reduction). The model is optimized with Adam (lr = 5 × 10⁻⁴), batch size = 1, for 100 epochs.
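The loss described above can be written out as a short NumPy sketch, using finite differences (`np.gradient`) to build the local Jacobian J = I + ∂u/∂x of the deformation x + u(x). The function names and the finite-difference scheme are illustrative choices, not the authors' implementation.

```python
import numpy as np

def jacobian_determinant(disp):
    """Local Jacobian determinant of the deformation x + u(x).
    disp: (D, H, W, 3) displacement field in voxel units."""
    # grads[c][a] = d u_c / d x_a, estimated by central differences.
    grads = np.stack([np.gradient(disp[..., c]) for c in range(3)], axis=0)
    J = np.moveaxis(np.asarray(grads), [0, 1], [-2, -1])  # (D, H, W, 3, 3)
    J = J + np.eye(3)                                      # J = I + du/dx
    return np.linalg.det(J)

def loss(phi_pred, phi_gt, healthy_mask, lam=50.0):
    """Voxel-wise MSE plus a folding penalty on healthy-tissue voxels."""
    mse = np.mean(np.sum((phi_pred - phi_gt) ** 2, axis=-1))
    det_j = jacobian_determinant(phi_pred)
    # ReLU(-det J): only negative determinants (local folding) are penalized.
    fold = np.mean(np.maximum(-det_j, 0.0)[healthy_mask])
    return mse + lam * fold

phi = np.zeros((8, 8, 8, 3))                # zero displacement = identity map
mask = np.ones((8, 8, 8), dtype=bool)
print(loss(phi, phi, mask))                 # -> 0.0 (no error, no folding)
```

Restricting the regularizer to Ω_healthy is important: near the resection cavity, non-diffeomorphic behavior is expected and should not be penalized.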
Evaluation uses several metrics: overall and tumor‑region MSE (mm²), maximum Euclidean error, 95th percentile Hausdorff distance (HD95) between warped segmentations, the percentage of voxels with non‑positive Jacobian determinants, and inference time. On the held‑out test set (25 cases), the proposed method reduces the MSE from 10.7 to 3.7 mm² relative to linear interpolation and from 6.5 to 3.4 mm² (~48 %) relative to TPS. Maximum error and HD95 also improve, while the percentage of negative Jacobians remains low (≈0.6 %). Inference time is comparable to the baselines (≈1.8 s for the linear‑based variant, 0.58 s for the TPS‑based one), confirming negligible computational overhead.
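The field-based metrics (HD95 aside, since it needs warped segmentations) are straightforward to compute from the predicted and ground-truth fields. A small sketch, with an illustrative `error_metrics` helper name:

```python
import numpy as np

def error_metrics(phi_pred, phi_gt, det_j):
    """phi_*: (D, H, W, 3) displacement fields in mm;
    det_j: (D, H, W) Jacobian determinants of the predicted deformation."""
    err = np.linalg.norm(phi_pred - phi_gt, axis=-1)   # per-voxel Euclidean error (mm)
    return {
        "mse_mm2": float(np.mean(err ** 2)),                    # overall MSE
        "max_err_mm": float(err.max()),                         # maximum Euclidean error
        "pct_nonpos_jac": float(100.0 * np.mean(det_j <= 0)),   # folding voxels (%)
    }

phi_gt = np.zeros((8, 8, 8, 3))
phi_pred = phi_gt + np.array([1.0, 0.0, 0.0])   # constant 1 mm error everywhere
m = error_metrics(phi_pred, phi_gt, np.ones((8, 8, 8)))
print(m)  # mse_mm2 = 1.0, max_err_mm = 1.0, pct_nonpos_jac = 0.0
```

Masking `err` to the tumor region before averaging would give the tumor-region MSE variant reported in the paper.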
Ablation studies confirm the contribution of each component: removing the residual network or the Jacobian regularizer degrades performance, and the model remains robust across different numbers of keypoints (5–50). The authors discuss the limitation that training relies on synthetic data; domain adaptation to real intra‑operative modalities (iMRI, iUS) will be required. They also suggest future work on automatic keypoint detection, integration with real‑time surgical workflows, and model compression for on‑device deployment.
In summary, by coupling large‑scale biomechanical simulations with a residual 3D U‑Net, the paper delivers a practical, accurate, and computationally efficient solution for keypoint‑based brain shift registration, bridging the gap between sparse intra‑operative observations and dense, biomechanically plausible deformation fields.