Uncovering Bugs in Formal Explainers: A Case Study with PyXAI
Reading time: 1 minute
...
📝 Original Info
- Title: Uncovering Bugs in Formal Explainers: A Case Study with PyXAI
- ArXiv ID: 2511.03169
- Date: 2025-11-05
- Authors: Author information was not provided for this paper. Please consult the original source to confirm it.
📝 Abstract
Formal explainable artificial intelligence (XAI) offers unique theoretical guarantees of rigor when compared to other non-formal methods of explainability. However, little attention has been given to the validation of practical implementations of formal explainers. This paper develops a novel methodology for validating formal explainers and reports on the assessment of the publicly available formal explainer PyXAI. The paper documents the existence of incorrect explanations computed by PyXAI on most of the datasets analyzed in the experiments, thereby confirming the importance of the proposed novel methodology for the validation of formal explainers.
💡 Deep Analysis
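To make the validation idea concrete: a formal (abductive) explanation for an instance is a subset of features whose fixed values are sufficient to guarantee the classifier's prediction, no matter how the remaining features vary. An explanation reported by a tool like PyXAI can therefore be checked against the classifier itself. The sketch below is a minimal illustration of that check, not the paper's actual methodology or PyXAI's API; the function name, the toy classifier, and the brute-force enumeration over finite feature domains are all assumptions chosen for clarity.

```python
from itertools import product

def is_abductive_explanation(predict, x, subset, domains):
    """Brute-force check that fixing the features in `subset` to their
    values in instance `x` forces `predict` to keep the same output,
    regardless of the values the remaining features take.
    (Hypothetical helper for illustration; only feasible for small,
    finite feature domains.)"""
    target = predict(x)
    free = [i for i in range(len(x)) if i not in subset]
    for values in product(*(domains[i] for i in free)):
        candidate = list(x)
        for i, v in zip(free, values):
            candidate[i] = v
        if predict(candidate) != target:
            return False  # counterexample found: subset is not sufficient
    return True

# Toy classifier over three binary features: predicts 1 iff (x0 AND x1) OR x2.
predict = lambda x: int((x[0] and x[1]) or x[2])
x = [1, 1, 0]
domains = [(0, 1)] * 3

print(is_abductive_explanation(predict, x, {0, 1}, domains))  # True: {x0, x1} suffices
print(is_abductive_explanation(predict, x, {0}, domains))     # False: flipping x2 changes the output
```

An explanation that fails such a check is incorrect in the formal sense, which is exactly the kind of defect the paper reports finding in PyXAI's output across most of the datasets it examined.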
Reference
This content was AI-processed from open access ArXiv data.