New results on inconsistency indices and their relationship with the quality of priority vector estimation
The article is devoted to the problem of inconsistency in the pairwise-comparisons-based prioritization methodology. The issue of “inconsistency” in this context has gained much attention in recent years. The literature provides a number of different “inconsistency” indices suggested for measuring the inconsistency of the pairwise comparison matrix (PCM), understood as the deviation of the PCM from the “consistent case”, a notion that is formally well defined in this theory. However, the use of these indices is justified only by heuristics, and it is still unclear what they really “measure”. Even more important, and still unknown, is the relationship between their values and the “consistency” of the decision maker’s judgments on the one hand, and the prioritization results on the other. We provide examples showing that it is necessary to distinguish between the following three tasks: the “measuring” of the “PCM inconsistency”, the PCM-based “measuring” of the consistency of the decision maker’s judgments, and, finally, the “measuring” of the usefulness of the PCM as a source of information for estimating the priority vector (PV). We then focus on the third task, which seems to be the most important one in Multi-Criteria Decision Making. With the help of Monte Carlo experiments, we study the performance of various inconsistency indices as indicators of the final PV estimation quality. The presented results allow a deeper understanding of the information contained in these indices and help in choosing a proper one in a given situation. They also enable us to develop a new inconsistency characteristic and, based on it, to propose a PCM acceptance approach supported by classical statistical methodology.
💡 Research Summary
The paper tackles a fundamental yet under‑explored issue in pairwise‑comparison‑based multi‑criteria decision making (MCDM): how well the numerous inconsistency indices that have been proposed for pairwise comparison matrices (PCMs) actually predict the quality of the resulting priority vector (PV). The authors begin by distinguishing three separate tasks that are often conflated in the literature. The first task is the “measurement of PCM inconsistency,” i.e., quantifying how far a given matrix deviates from the mathematically defined consistent case. The second task is the “measurement of the decision maker’s judgment consistency,” which asks whether the judgments supplied by the expert are internally coherent. The third task, which the authors argue is the most relevant for decision support, is the “measurement of the usefulness of the PCM as a source of information for estimating the PV.” While many studies focus on the first two tasks, the relationship between an index value and the actual error in the estimated PV has remained largely heuristic.
To address this gap, the authors design an extensive Monte‑Carlo simulation. They generate thousands of synthetic PCMs by starting from perfectly consistent matrices and adding controlled random perturbations (noise) drawn from several distributions (Gaussian, uniform, and asymmetric). For each noisy PCM they compute a suite of well‑known inconsistency indices – Saaty’s Consistency Ratio (CR), the Geometric Consistency Index (GCI), Koczkodaj’s Index (KI), the Harmonic Consistency Index (HCI) – as well as a newly proposed composite measure called the Information Loss Index (ILI). Simultaneously, they derive priority vectors using three common estimation methods: the eigenvector method (based on the principal eigenvector associated with the dominant eigenvalue), the logarithmic least‑squares method, and the geometric mean method. The true priority vector is known because it is the one used to generate the consistent matrix, allowing the authors to compute exact estimation errors (mean absolute error, MAE, and root‑mean‑square error, RMSE).
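The core of such a simulation step can be sketched in a few lines. This is a minimal illustration, not the authors' actual code: it assumes multiplicative log-normal noise on the upper triangle (one of several plausible perturbation schemes; the paper also uses Gaussian, uniform, and asymmetric distributions), and all function names here are hypothetical.

```python
import numpy as np

def consistent_pcm(w):
    # A consistent PCM has entries a_ij = w_i / w_j
    w = np.asarray(w, dtype=float)
    return np.outer(w, 1.0 / w)

def perturb_pcm(A, sigma, rng):
    # Multiply each upper-triangular entry by a log-normal factor,
    # then restore reciprocity: a_ji = 1 / a_ij
    n = A.shape[0]
    B = np.ones_like(A)
    for i in range(n):
        for j in range(i + 1, n):
            B[i, j] = A[i, j] * np.exp(rng.normal(0.0, sigma))
            B[j, i] = 1.0 / B[i, j]
    return B

def geometric_mean_pv(A):
    # Row geometric means, normalized to sum to 1
    g = np.prod(A, axis=1) ** (1.0 / A.shape[0])
    return g / g.sum()

def consistency_ratio(A):
    # Saaty's CR = CI / RI with CI = (lambda_max - n) / (n - 1)
    n = A.shape[0]
    lam = max(np.linalg.eigvals(A).real)
    ci = (lam - n) / (n - 1)
    RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}  # Saaty's random indices
    return ci / RI[n]

rng = np.random.default_rng(0)
w_true = np.array([0.4, 0.3, 0.2, 0.1])       # known "true" priority vector
A = perturb_pcm(consistent_pcm(w_true), sigma=0.3, rng=rng)
w_hat = geometric_mean_pv(A)                  # estimate PV from the noisy PCM
mae = np.mean(np.abs(w_hat - w_true))         # exact error, since w_true is known
```

Because the matrix is generated from a known vector, the estimation error is exact rather than inferred, which is what makes the benchmark possible.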
Statistical analysis of the simulation results reveals several key insights. First, not all inconsistency indices are equally informative about PV estimation error. The traditional CR only correlates strongly with error at high inconsistency levels; in the moderate range it fails to discriminate. By contrast, both GCI and KI maintain a relatively high Pearson/Spearman correlation across the whole spectrum of noise, making them more reliable predictors. Second, the newly introduced ILI, which combines eigenvalue dispersion and matrix asymmetry, consistently outperforms all existing indices, achieving the highest R² in regression models of error versus index value. Third, the predictive power of an index depends on the PV estimation technique. The eigenvector method is highly sensitive to inconsistency – its error remains negligible for low‑noise matrices but escalates sharply once inconsistency exceeds a modest threshold. The geometric mean method is more robust, yet still benefits from a strong correlation with KI and especially ILI.
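The kind of index-versus-error correlation analysis described above can be reproduced in miniature. The sketch below, under the same hypothetical setup (log-normal noise, geometric-mean estimator, CR as the index), computes a Spearman rank correlation between index values and PV errors over many noisy matrices; it illustrates the methodology, not the paper's reported figures.

```python
import numpy as np

def spearman(x, y):
    # Spearman rank correlation = Pearson correlation of the ranks
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(1)
n, trials = 5, 500
w = rng.dirichlet(np.ones(n))               # fixed "true" priority vector
cr_vals, errors = [], []
for _ in range(trials):
    sigma = rng.uniform(0.05, 0.6)          # vary the noise level per trial
    A = np.outer(w, 1.0 / w)                # consistent base matrix
    for i in range(n):
        for j in range(i + 1, n):
            A[i, j] *= np.exp(rng.normal(0.0, sigma))
            A[j, i] = 1.0 / A[i, j]
    lam = max(np.linalg.eigvals(A).real)    # lambda_max for Saaty's CI
    cr_vals.append((lam - n) / ((n - 1) * 1.12))  # RI = 1.12 for n = 5
    g = np.prod(A, axis=1) ** (1.0 / n)     # geometric-mean priority vector
    errors.append(np.abs(g / g.sum() - w).mean())
rho = spearman(np.array(cr_vals), np.array(errors))
```

A positive `rho` means higher index values do tend to accompany larger estimation errors; the paper's point is that the strength of this relationship varies markedly across indices and noise ranges.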
Building on these empirical findings, the authors propose a statistically grounded acceptance rule for PCMs. Using the distribution of ILI values observed in the simulations, they construct a 95 % confidence interval that defines a “statistically acceptable” level of information loss. Any PCM whose ILI exceeds the upper bound is flagged as unreliable for priority estimation. This data‑driven rule replaces the customary heuristic CR < 0.1 threshold and can be adapted to different decision contexts (e.g., varying numbers of alternatives or expert panel sizes).
The paper concludes by summarizing its three main contributions: (1) a clear conceptual separation of the three tasks associated with inconsistency, (2) a comprehensive Monte‑Carlo benchmark that demonstrates the superior predictive capability of the ILI and highlights the limitations of traditional indices, and (3) a novel, statistically justified PCM acceptance framework that can be directly applied in practice. The authors suggest future research directions, including extending ILI to other MCDM methods such as TOPSIS or VIKOR, and validating the approach with real‑world decision‑making data and user‑experience studies. Overall, the study provides a rigorous bridge between theoretical inconsistency measures and the practical quality of decision‑support outputs, offering both scholars and practitioners a more reliable toolset for handling imperfect pairwise judgments.