Beyond Core and Penumbra: Bi-Temporal Image-Driven Stroke Evolution Analysis


Computed tomography perfusion (CTP) at admission is routinely used to estimate the ischemic core and penumbra, while follow-up diffusion-weighted MRI (DWI) provides the definitive infarct outcome. However, single time-point segmentations fail to capture the biological heterogeneity and temporal evolution of stroke. We propose a bi-temporal analysis framework that characterizes ischemic tissue using statistical descriptors, radiomic texture features, and deep feature embeddings from two architectures (mJ-Net and nnU-Net). Bi-temporal refers to admission (T1) and post-treatment follow-up (T2). All features are extracted at T1 from CTP, with follow-up DWI aligned to ensure spatial correspondence. Manually delineated masks at T1 and T2 are intersected to construct six regions of interest (ROIs) encoding both initial tissue state and final outcome. Features were aggregated per region and analyzed in feature space. Evaluation on 18 patients with successful reperfusion demonstrated meaningful clustering of region-level representations. Regions classified as penumbra or healthy at T1 that ultimately recovered exhibited feature similarity to preserved brain tissue, whereas infarct-bound regions formed distinct groupings. Both baseline GLCM and deep embeddings showed a similar trend: penumbra regions exhibit features that are significantly different depending on final state, whereas this difference is not significant for core regions. Deep feature spaces, particularly mJ-Net, showed strong separation between salvageable and non-salvageable tissue, with a penumbra separation index that differed significantly from zero (Wilcoxon signed-rank test). These findings suggest that encoder-derived feature manifolds reflect underlying tissue phenotypes and state transitions, providing insight into imaging-based quantification of stroke evolution.


💡 Research Summary

The paper introduces a bi‑temporal framework for analyzing ischemic stroke evolution by jointly leveraging admission‑time computed tomography perfusion (CTP) and follow‑up diffusion‑weighted MRI (DWI). Traditional approaches rely on a single time‑point to delineate core and penumbra, which fails to capture the heterogeneous and dynamic nature of stroke tissue. To address this, the authors first acquire 4‑D CTP (30 time points) at admission (T1) and DWI 24–72 h later (T2). After rigid motion correction and brain extraction, the DWI volume is cropped to the CTP slab and registered to the CTP grid, ensuring voxel‑wise correspondence.

Manual segmentations by expert neuroradiologists produce four masks on CTP (core, penumbra, normal tissue, contralateral healthy tissue) and a final infarct mask on DWI. By intersecting the T1 and T2 masks, six bi‑temporal regions of interest (ROIs) are defined, representing all possible transitions (e.g., penumbra → recovered, penumbra → infarct, core → recovered, etc.). This ROI taxonomy enables a direct comparison of the same voxels’ baseline imaging characteristics with their eventual fate.
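The mask-intersection step can be sketched in a few lines. This is an illustrative reading of the ROI taxonomy, not the authors' code: the exact set of six transitions and the handling of the normal/contralateral masks are assumptions; here the T1 core, penumbra, and healthy masks are each split by the T2 infarct mask and its complement.

```python
import numpy as np

def bitemporal_rois(core_t1, penumbra_t1, healthy_t1, infarct_t2):
    """Intersect T1 tissue masks with the T2 infarct mask (illustrative).

    All inputs are boolean arrays on the same registered voxel grid;
    returns a dict of six bi-temporal ROIs (initial state x final outcome).
    """
    recovered_t2 = ~infarct_t2
    return {
        "core_to_infarct":       core_t1 & infarct_t2,
        "core_to_recovered":     core_t1 & recovered_t2,
        "penumbra_to_infarct":   penumbra_t1 & infarct_t2,
        "penumbra_to_recovered": penumbra_t1 & recovered_t2,
        "healthy_to_infarct":    healthy_t1 & infarct_t2,
        "healthy_to_recovered":  healthy_t1 & recovered_t2,
    }
```

Because the DWI volume has been registered to the CTP grid beforehand, plain element-wise boolean intersection is sufficient; no resampling is needed at this stage.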

Four families of features are extracted from the CTP data for each ROI:

  1. Baseline statistical descriptors – a 3 × 3 × 30 sliding window yields six first‑order moments (mean, std, skewness, kurtosis, min, max). ROI‑wise vectors are obtained by max‑pooling across voxels.
  2. Radiomic texture (GLCM) – 3‑D gray‑level co‑occurrence matrices are computed (bin width = 8, 26‑neighbor connectivity). Four texture metrics (IMC1, IMC2, maximal correlation coefficient, and correlation) are retained after statistical screening.
  3. Deep CNN embeddings – mJ‑Net – a 2D + time segmentation network specifically designed for CTP core/penumbra segmentation. The network is used in inference mode; intermediate feature maps are averaged within each ROI to produce high‑dimensional embeddings.
  4. Deep CNN embeddings – nnU‑Net – a generic 2D segmentation architecture where the 30 time points are stacked as channels. As with mJ‑Net, intermediate activations are pooled per ROI.
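The first feature family above can be sketched as follows. This is a minimal sketch assuming a 3 × 3 spatial window spanning all 30 time points, per-window first-order moments, and max-pooling over ROI voxels as described; border handling (windows are only taken where they fit fully inside the slice) is an assumption.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view
from scipy.stats import skew, kurtosis

def roi_stat_descriptor(ctp_slice, roi_mask, win=3):
    """Six first-order moments per sliding window, max-pooled over an ROI.

    ctp_slice: (H, W, T) perfusion time series for one axial slice.
    roi_mask:  (H, W) boolean ROI mask on the same grid.
    Returns a 6-vector: (mean, std, skewness, kurtosis, min, max).
    """
    H, W, T = ctp_slice.shape
    # 3x3 spatial windows, each keeping the full time axis
    wins = sliding_window_view(ctp_slice, (win, win, T))
    wins = wins.reshape(H - win + 1, W - win + 1, -1)
    # restrict to ROI voxels whose window lies fully inside the slice
    inner = roi_mask[win // 2 : H - win // 2, win // 2 : W - win // 2]
    vox = wins[inner]  # (n_voxels, win * win * T)
    if vox.size == 0:
        return np.full(6, np.nan)
    feats = np.stack([
        vox.mean(axis=1), vox.std(axis=1),
        skew(vox, axis=1), kurtosis(vox, axis=1),
        vox.min(axis=1), vox.max(axis=1),
    ], axis=1)
    return feats.max(axis=0)  # max-pool across ROI voxels -> 6-vector
```

Max-pooling keeps, for each moment, its most extreme value over the ROI, so one fixed-length vector summarizes a region regardless of its size.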

For each patient (n = 18 recanalized large‑vessel‑occlusion cases) and each axial slice, the four feature vectors are concatenated, yielding compact representations of tissue state at T1. Dimensionality reduction (t‑SNE, UMAP) visualizes the distribution of ROI embeddings. Statistical analysis (Wilcoxon signed‑rank test) evaluates a “penumbra separation index” that quantifies how far penumbra‑to‑recovery and penumbra‑to‑infarct clusters are separated.
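The statistical test described above can be sketched as follows. The summary does not give the exact formula for the penumbra separation index, so the definition here (a signed, normalized contrast between the distances of the two penumbra clusters to the healthy-tissue centroid) is an illustrative assumption; the per-patient synthetic embeddings are placeholders.

```python
import numpy as np
from scipy.stats import wilcoxon

def penumbra_separation_index(rec_emb, inf_emb, healthy_emb):
    """Illustrative separation index (not the paper's exact formula):
    positive when penumbra-to-recovered embeddings lie closer to healthy
    tissue than penumbra-to-infarct embeddings do in feature space.
    """
    h = healthy_emb.mean(axis=0)
    d_rec = np.linalg.norm(rec_emb.mean(axis=0) - h)
    d_inf = np.linalg.norm(inf_emb.mean(axis=0) - h)
    return (d_inf - d_rec) / (d_inf + d_rec + 1e-12)

# One index per patient (synthetic stand-ins for the 18 cases); the Wilcoxon
# signed-rank test then asks whether the median index differs from zero.
rng = np.random.default_rng(0)
indices = np.array([
    penumbra_separation_index(
        rng.normal(0.0, 1.0, (50, 8)),  # recovered: near "healthy" cluster
        rng.normal(2.0, 1.0, (40, 8)),  # infarct-bound: shifted cluster
        rng.normal(0.0, 1.0, (60, 8)),  # healthy contralateral tissue
    )
    for _ in range(18)
])
stat, p = wilcoxon(indices)
```

A consistently positive index across patients, confirmed by a small `p`, is what the paper reports for the mJ-Net embedding space.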

Key findings:

  • Penumbra heterogeneity: Penumbra voxels that later recovered (penumbra → healthy) cluster tightly with healthy contralateral tissue, whereas penumbra voxels that progressed to infarction form a distinct cluster. This separation is evident across baseline, GLCM, and deep embeddings.
  • Core stability: Core voxels show little divergence between those that recover and those that do not, indicating that baseline core features are less predictive of eventual outcome.
  • Deep embeddings outperform texture: While GLCM features capture some differences, the mJ‑Net embeddings achieve a highly significant separation (p ≈ 1.5 × 10⁻⁴). The temporal‑aware architecture of mJ‑Net appears to encode subtle perfusion dynamics that are invisible to handcrafted texture metrics.
  • Clinical implication: The ability to predict tissue fate from admission‑time CTP alone suggests that deep feature manifolds could serve as biomarkers for treatment decision‑making, potentially extending therapeutic windows for patients with salvageable penumbra.

The authors acknowledge limitations: the cohort is small (18 patients), all from a single center, and the analysis is retrospective. Future work should validate the approach on larger, multi‑center datasets, integrate clinical variables (onset‑to‑treatment time, reperfusion scores), and develop real‑time pipelines for automatic ROI generation and feature extraction.

In summary, this study demonstrates that a bi‑temporal, multi‑modal feature extraction strategy—combining statistical, radiomic, and deep learning descriptors—can quantitatively capture the evolution of ischemic tissue. The findings highlight the promise of encoder‑derived feature spaces as surrogate phenotypes of tissue viability, paving the way for more nuanced, data‑driven stroke management.

