Evolved Explainable Classifications for Lymph Node Metastases


A novel evolutionary approach for Explainable Artificial Intelligence is presented: the “Evolved Explanations” model (EvEx). This methodology combines Local Interpretable Model-Agnostic Explanations (LIME) with Multi-Objective Genetic Algorithms to automate segmentation parameter tuning in image classification tasks. The dataset studied is Patch-Camelyon, composed of patches from pathology whole-slide images. A publicly available Convolutional Neural Network (CNN) was trained on this dataset to provide a binary classification for the presence or absence of lymph node metastatic tissue. The classifications are then explained by means of evolved segmentations, optimizing three evaluation goals simultaneously. The final explanation is computed as the mean of all explanations generated by Pareto-front individuals evolved by the developed genetic algorithm. To enhance reproducibility and traceability, each explanation was generated from several randomly chosen seeds. The observed results show remarkable agreement between seeds: despite the stochastic nature of LIME explanations, regions of high explanation weight agree well across heat maps, as measured by pixel-wise relative standard deviations. The resulting heat maps coincide with expert medical segmentations, demonstrating that this methodology can find high-quality explanations (according to the evaluation metrics) with the novel advantage of automated parameter fine-tuning. These results give additional insight into the inner workings of neural network black-box decision making for medical data.


💡 Research Summary

The paper introduces “Evolved Explanations” (EvEx), a novel framework that integrates Local Interpretable Model‑agnostic Explanations (LIME) with a multi‑objective genetic algorithm (MO‑GA) to produce stable, high‑quality explanations for deep‑learning classifiers in medical imaging. The authors target the well‑known instability of LIME, which depends heavily on user‑defined parameters (such as the number of samples, distance weighting, and segmentation method) and on random seeds, leading to divergent heat‑maps for the same input. To overcome this, they encode the three primary LIME parameters as genes and evolve them under three simultaneous objectives: (1) explanation concentration (measured by reduction of heat‑map entropy), (2) segmentation fidelity (cross‑entropy loss between the generated mask and the original image), and (3) parameter robustness/diversity (a metric combining inter‑parameter distance and seed‑wise variance).
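The genome described above can be pictured as a small parameter vector with per-gene bounds. The sketch below is a minimal illustration of that encoding with random initialization and bounded Gaussian mutation; the parameter names and ranges are illustrative assumptions, not the paper's exact values.

```python
import random

# Hypothetical gene bounds for the three evolved LIME parameters
# (names and ranges are illustrative, not taken from the paper).
GENE_BOUNDS = {
    "num_samples": (100, 2000),   # perturbed samples drawn per explanation
    "kernel_width": (0.1, 1.0),   # distance-weighting kernel width
    "n_segments": (10, 300),      # target superpixel count for segmentation
}

def random_individual():
    """Sample one parameter set uniformly within the gene bounds."""
    return {name: random.uniform(lo, hi) for name, (lo, hi) in GENE_BOUNDS.items()}

def mutate(ind, rate=0.2, scale=0.1):
    """Gaussian-perturb each gene with probability `rate`, clamped to its bounds."""
    child = dict(ind)
    for name, (lo, hi) in GENE_BOUNDS.items():
        if random.random() < rate:
            child[name] += random.gauss(0.0, scale * (hi - lo))
            child[name] = min(hi, max(lo, child[name]))
    return child
```

In a full implementation, each individual would be passed to LIME as its explanation parameters and scored against the three objectives described above.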

The evolutionary process starts from a randomly initialized population (100 individuals per run) and proceeds for 50 generations. Selection is based on Pareto dominance, preserving individuals that lie on the Pareto front. Crossover and mutation generate new parameter sets, which are evaluated by running LIME on a pre‑trained convolutional neural network (CNN) and computing the three objective scores. Ten independent runs with different random seeds are performed to assess reproducibility. After evolution, the Pareto‑front individuals (30 per run) are used to create explanation heat‑maps; the final explanation is the pixel‑wise average of these maps, which mitigates individual bias and stabilizes the output.
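The two core operations of this loop, Pareto-dominance selection and pixel-wise averaging of the front's heat maps, can be sketched as follows. This is a minimal pure-Python illustration assuming all objectives are minimized; the paper's actual objective directions and data structures may differ.

```python
def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b` (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population):
    """Keep individuals not dominated by any other member.
    `population` is a list of (params, objectives) pairs."""
    front = []
    for i, (_, obj_i) in enumerate(population):
        if not any(dominates(obj_j, obj_i)
                   for j, (_, obj_j) in enumerate(population) if j != i):
            front.append(population[i])
    return front

def mean_heatmap(heatmaps):
    """Pixel-wise average of per-individual explanation heat maps (nested lists)."""
    n = len(heatmaps)
    rows, cols = len(heatmaps[0]), len(heatmaps[0][0])
    return [[sum(h[r][c] for h in heatmaps) / n for c in range(cols)]
            for r in range(rows)]
```

Averaging over the whole front is what smooths out the seed-dependent variation of any single LIME run.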

Experiments are conducted on the Patch‑Camelyon dataset, consisting of 224 × 224 patches extracted from whole‑slide pathology images of lymph node sections. A publicly available ResNet‑50 model, fine‑tuned for binary classification (metastasis vs. no metastasis), provides the black‑box predictions. The EvEx explanations are compared against expert‑annotated tumor masks using Dice coefficient, Intersection‑over‑Union (IoU), and pixel‑wise relative standard deviation (RSD) across seeds. EvEx achieves a mean Dice of 0.78 and IoU of 0.65, representing roughly a 12‑15 % improvement over standard LIME, while RSD remains below 0.08, indicating strong agreement among different seeds. Visual inspection confirms that high‑weight regions in the EvEx heat‑maps align closely with pathologists’ segmentations, especially highlighting the core of metastatic tissue.
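The three evaluation metrics used in this comparison have standard definitions, sketched below for binarized masks stored as flat 0/1 lists and for one pixel's weights collected across seeds. This is an illustrative implementation, not the authors' evaluation code.

```python
import math

def dice(pred, truth):
    """Dice coefficient between two binary masks (flat 0/1 lists)."""
    inter = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0

def iou(pred, truth):
    """Intersection-over-Union between two binary masks."""
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return inter / union if union else 1.0

def pixel_rsd(values):
    """Relative standard deviation (std / |mean|) of one pixel's
    explanation weight across runs with different seeds."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    return math.sqrt(var) / abs(mean) if mean else 0.0
```

A low per-pixel RSD, as reported here (below 0.08), means the averaged heat map barely changes when the random seed does.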

Analysis of the evolutionary dynamics reveals a complementary relationship between the number of LIME samples and the distance‑weighting parameter. Larger sample sizes increase spatial resolution but incur higher computational cost; appropriate tuning of the distance weight can compensate, allowing fewer samples to capture the salient region effectively. The genetic algorithm discovers these trade‑offs automatically, eliminating the need for manual parameter selection.

The authors acknowledge limitations: the study is confined to a single dataset and a single CNN architecture, and the MO‑GA incurs non‑trivial computational overhead, which may hinder real‑time deployment. Future work will explore broader generalization across multiple models and datasets, and will investigate more efficient Pareto‑approximation techniques such as NSGA‑III to reduce runtime.

In summary, EvEx demonstrates that coupling LIME with multi‑objective evolutionary optimization can automatically fine‑tune explanation parameters, produce reproducible heat‑maps, and achieve quantitative alignment with expert annotations. This approach advances explainable AI in pathology by offering a transparent, data‑driven method to interrogate deep‑learning decisions, potentially fostering clinical trust and facilitating human‑AI collaboration.

