A Review: Expert System for Diagnosis of Myocardial Infarction
A computer program capable of performing at a human-expert level in a narrow problem domain is called an expert system. Managing uncertainty is an intrinsically important issue in the design of expert systems because much of the information in the knowledge base of a typical expert system is imprecise, incomplete, or not fully reliable. In this paper, the authors present a review of past work carried out by various researchers on the development of expert systems for the diagnosis of cardiac disease.
💡 Research Summary
The paper provides a comprehensive review of expert‑system research aimed at diagnosing myocardial infarction (MI). It begins by defining an expert system as a computer program that can perform at a human‑expert level within a narrow domain and emphasizes why such systems are particularly valuable in acute cardiac care, where rapid and accurate diagnosis directly influences patient outcomes. The authors then outline the two fundamental components of any expert system: the knowledge base, which encodes clinical expertise, guidelines, and empirical data; and the inference engine, which applies this knowledge to patient‑specific inputs such as electrocardiogram (ECG) waveforms, cardiac biomarkers, and symptom descriptions.
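The two-component architecture described above can be sketched in a few lines. This is a minimal, illustrative skeleton, not the paper's implementation: the rule names, thresholds, and patient fields are all hypothetical stand-ins for what a real knowledge base would encode.

```python
# Minimal sketch of the two-component architecture: a knowledge base of
# IF-THEN rules and an inference engine that applies them to patient data.
# All conditions and thresholds below are invented for illustration.

def infer(knowledge_base, facts):
    """Inference engine: apply every rule whose condition holds on the
    patient facts, collecting the conclusions that fire."""
    conclusions = []
    for condition, conclusion in knowledge_base:
        if condition(facts):
            conclusions.append(conclusion)
    return conclusions

# Hypothetical knowledge base: each entry is (condition, conclusion).
KB = [
    (lambda f: f["st_elevation_mm"] >= 1.0 and f["chest_pain"],
     "suspect STEMI"),
    (lambda f: f["troponin_ng_ml"] > 0.04,
     "elevated troponin: consistent with myocardial injury"),
]

# Patient-specific inputs, as fed to the inference engine.
patient = {"st_elevation_mm": 2.0, "chest_pain": True, "troponin_ng_ml": 0.10}
print(infer(KB, patient))
```

Keeping the knowledge base as data separate from the engine is what lets clinicians update rules without touching the reasoning machinery.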
The review categorizes existing MI‑diagnostic systems into five principal families. The first family consists of classic rule‑based systems that employ IF‑THEN statements derived from cardiology textbooks and expert consensus. While these systems are transparent and easy to validate, they suffer from scalability problems as the rule set expands and from limited ability to handle ambiguous or incomplete data. The second family incorporates fuzzy‑logic techniques. By translating linguistic descriptors (“slightly elevated,” “markedly abnormal”) into fuzzy membership functions, these systems can model the inherent vagueness of clinical reasoning. However, the design of membership functions remains subjective, and performance is highly dependent on the quality of the fuzzy rule set.
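The fuzzy-logic idea of translating linguistic descriptors into membership functions can be shown concretely. The triangular shapes and breakpoints below are invented for the sketch; in a real system they would be elicited from cardiologists, which is exactly the subjectivity the review flags.

```python
# Illustrative fuzzy membership functions for a linguistic variable such
# as "troponin level" (ng/mL). Breakpoints are hypothetical.

def triangular(x, a, b, c):
    """Triangular membership: 0 at a, rising to 1 at b, back to 0 at c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Degree to which a troponin value counts as each linguistic descriptor.
def slightly_elevated(t):
    return triangular(t, 0.02, 0.06, 0.10)

def markedly_abnormal(t):
    return triangular(t, 0.08, 0.50, 1.00)

print(slightly_elevated(0.06))  # 1.0: fully "slightly elevated"
print(markedly_abnormal(0.06))  # 0.0: not yet "markedly abnormal"
```

A fuzzy rule set then combines such degrees (e.g., via min/max operators) instead of forcing each finding into a hard true/false decision.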
The third family utilizes probabilistic graphical models, primarily Bayesian networks. These models explicitly represent conditional dependencies among variables (e.g., the relationship between ST‑segment deviation and troponin levels) and compute posterior probabilities that quantify diagnostic confidence even when some inputs are missing or noisy. Studies cited in the review report area‑under‑the‑curve (AUC) values above 0.85 for Bayesian approaches on internal validation sets.
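The graceful handling of missing inputs that the review attributes to probabilistic models follows directly from Bayes' rule: an unobserved finding simply contributes no likelihood term. The sketch below uses a naive-Bayes simplification of a full network, with a prior and likelihoods that are invented numbers, purely for illustration.

```python
# Toy Bayesian update for P(MI | findings). Findings set to None are
# skipped, so the posterior degrades gracefully when data is missing.
# Prior and likelihood values are hypothetical.

def posterior(prior, likelihoods, observations):
    """Naive-Bayes posterior: multiply in P(value | MI) and
    P(value | no MI) for each observed finding, then normalize."""
    p_mi, p_not = prior, 1.0 - prior
    for name, value in observations.items():
        if value is None:          # missing input: contributes nothing
            continue
        l_mi, l_not = likelihoods[name][value]
        p_mi *= l_mi
        p_not *= l_not
    return p_mi / (p_mi + p_not)

# Hypothetical table: finding -> value -> (P(value|MI), P(value|no MI)).
L = {
    "st_deviation":  {True: (0.80, 0.10), False: (0.20, 0.90)},
    "troponin_high": {True: (0.90, 0.05), False: (0.10, 0.95)},
}

# Troponin result not yet available: the posterior uses the ECG alone.
p = posterior(0.15, L, {"st_deviation": True, "troponin_high": None})
print(round(p, 3))  # 0.585
```

With the troponin result added later, the same function simply multiplies in the extra likelihood ratio, which is how these systems refine diagnostic confidence as data arrives.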
The fourth family leverages artificial neural networks (ANNs) and deep‑learning architectures. Convolutional neural networks (CNNs) have been applied to raw ECG traces, while recurrent neural networks (RNNs) handle time‑series biomarker data. These data‑driven models can discover complex, non‑linear patterns that elude handcrafted rules, achieving state‑of‑the‑art performance on large, labeled datasets. Nevertheless, the “black‑box” nature of deep models raises concerns about interpretability and regulatory acceptance, prompting researchers to explore saliency maps and other explainable‑AI (XAI) techniques.
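The core mechanism a CNN applies to a raw ECG trace is a bank of learned 1-D filters slid along the signal. The stdlib-only sketch below applies one hand-written filter (a crude upward-spike detector) to a toy trace to show the mechanics; a real network learns thousands of such filter weights from labeled data, and the toy signal and weights here are assumptions of the sketch.

```python
# First-layer CNN mechanics on a 1-D signal: cross-correlation + ReLU.

def conv1d(signal, kernel):
    """Valid-mode 1-D cross-correlation, as used in CNN layers."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    """Rectified linear unit: zero out negative responses."""
    return [max(0.0, x) for x in xs]

# Toy "ECG": flat baseline with one sharp peak at index 4.
trace = [0.0, 0.0, 0.1, 0.3, 1.0, 0.3, 0.1, 0.0, 0.0]
spike_filter = [-1.0, 2.0, -1.0]   # responds strongly to local maxima

feature_map = relu(conv1d(trace, spike_filter))
print(feature_map.index(max(feature_map)))  # 3: window centered on the peak
```

Stacking many such layers is what lets deep models pick up morphology changes (e.g., ST-segment shifts) that are hard to capture in handcrafted rules, at the cost of the interpretability issues the review notes.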
The fifth family comprises hybrid systems that combine two or more of the above paradigms. Representative examples include a fuzzy‑preprocessing stage followed by Bayesian inference, and a rule‑based engine augmented with ANN‑derived high‑level features. Hybrid designs aim to capitalize on the interpretability of rule‑based or probabilistic methods while exploiting the pattern‑recognition strength of neural networks. Empirical results indicate that such combinations can improve diagnostic accuracy by 5–10 % relative to single‑method baselines and can reduce clinician decision time by roughly one‑third in pilot deployments.
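One hybrid pattern mentioned above, a rule-based engine augmented with model-derived features, can be sketched as follows. The stand-in "model", its weights, and the rule thresholds are all hypothetical; the point is the structure, where an opaque score feeds a transparent, auditable decision layer.

```python
# Hedged sketch of a hybrid design: a data-driven model supplies a risk
# score, and an interpretable rule layer makes the final recommendation.

def mock_model_score(features):
    """Stand-in for an ANN risk score in [0, 1] (hypothetical weights)."""
    w = {"st_deviation_mm": 0.3, "troponin_ng_ml": 5.0}
    s = sum(w[k] * features[k] for k in w)
    return min(1.0, s)

def hybrid_decision(features):
    score = mock_model_score(features)
    # Transparent rule layer on top of the opaque score:
    if score >= 0.8 and features["chest_pain"]:
        return "high risk: activate MI pathway"
    if score >= 0.5:
        return "intermediate risk: serial troponin and ECG"
    return "low risk: routine work-up"

patient = {"st_deviation_mm": 2.0, "troponin_ng_ml": 0.10, "chest_pain": True}
print(hybrid_decision(patient))
```

Because the final branch points are explicit rules, a clinician can trace exactly why a given recommendation was made even though the score itself came from a learned model.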
Performance evaluation across the surveyed literature predominantly relies on sensitivity, specificity, overall accuracy, and ROC‑AUC. Most studies report internal cross‑validation results, but external validation on multi‑center cohorts is scarce, limiting generalizability. The authors also note methodological gaps such as insufficient handling of data quality issues (noise, missing values), lack of standardized reporting, and limited assessment of long‑term clinical impact.
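The standard metrics used across the surveyed literature all derive from the confusion matrix. The counts below are invented to illustrate the definitions, not results from any cited study.

```python
# Sensitivity, specificity, and accuracy from a confusion matrix of
# hypothetical validation counts (tp/fn/fp/tn).

def metrics(tp, fn, fp, tn):
    sensitivity = tp / (tp + fn)                 # true-positive rate
    specificity = tn / (tn + fp)                 # true-negative rate
    accuracy = (tp + tn) / (tp + fn + fp + tn)   # overall agreement
    return sensitivity, specificity, accuracy

# Invented counts for a 200-patient validation set.
sens, spec, acc = metrics(tp=85, fn=15, fp=10, tn=90)
print(sens, spec, acc)  # 0.85 0.9 0.875
```

ROC-AUC extends this picture by sweeping the decision threshold and integrating sensitivity against the false-positive rate, which is why it is the preferred summary when systems output a continuous risk score.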
Beyond technical considerations, the paper discusses regulatory, ethical, and usability challenges. Patient privacy, especially under data‑sharing constraints, motivates the exploration of federated learning and differential privacy. Transparency and accountability are addressed through emerging XAI frameworks that provide clinicians with rationale for each automated decision. Integration with existing hospital information systems remains a practical hurdle; the authors advocate for standardized APIs and interoperable data models (e.g., HL7 FHIR) to facilitate seamless workflow incorporation.
In conclusion, the authors outline future research directions: (1) assembling large, diverse, multi‑institutional datasets to enable robust external validation; (2) developing continual‑learning mechanisms that allow models to adapt to evolving clinical guidelines without catastrophic forgetting; (3) advancing explainable‑AI methods tailored to cardiology to satisfy both clinicians and regulators; and (4) co‑designing user interfaces that present probabilistic outputs in an intuitive manner, thereby fostering trust and adoption. The review thus serves as both a state‑of‑the‑art snapshot and a roadmap for the next generation of MI diagnostic expert systems.