Evaluating and Improving Modern Variable and Revision Ordering Strategies in CSPs
A key factor that can dramatically reduce the search space during constraint solving is the criterion under which the variable to be instantiated next is selected. For this purpose numerous heuristics have been proposed. Some of the best such heuristics exploit information about failures gathered throughout search and recorded in the form of constraint weights, while others measure the importance of variable assignments in reducing the search space. In this work we experimentally evaluate the most recent and powerful variable ordering heuristics, and new variants of them, over a wide range of benchmarks. Results demonstrate that heuristics based on failures are in general more efficient. Based on this, we then derive new revision ordering heuristics that exploit recorded failures to efficiently order the propagation list when arc consistency is maintained during search. Interestingly, in addition to reducing the number of constraint checks and list operations, these heuristics are also able to cut down the size of the explored search tree.
💡 Research Summary
This paper investigates two fundamental decision mechanisms in constraint satisfaction problem (CSP) solving: the choice of the next variable to instantiate (variable ordering) and the order in which revisions (propagation steps) are performed when maintaining arc consistency (AC). The authors begin by reviewing the most influential modern variable‑ordering heuristics. Traditional heuristics such as Minimum Remaining Values (MRV) focus on static domain information, whereas recent approaches exploit dynamic failure information gathered during search. The most prominent failure‑based heuristics are Weighted Degree (WD) and its refinement dom/wdeg, which increment a weight on each constraint every time a dead‑end is encountered; variables incident to high‑weight constraints are then prioritized. In parallel, impact‑based heuristics (impact, activity) estimate the reduction in search space caused by assigning a particular value and give precedence to variables with high expected impact.
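The mechanics described above can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's solver: the class and function names (`Variable`, `Constraint`, `wdeg`, `select_variable`, `on_wipeout`) are assumptions, and the initial weight of 1 follows the common wdeg convention of counting a constraint's plain degree before any failure occurs.

```python
# Sketch of the dom/wdeg variable-ordering heuristic.
# All names here are illustrative, not taken from the paper's code.

class Constraint:
    def __init__(self, scope):
        self.scope = scope      # the variables this constraint relates
        self.weight = 1         # incremented each time this constraint wipes out a domain

class Variable:
    def __init__(self, name, domain):
        self.name = name
        self.domain = set(domain)
        self.constraints = []   # constraints whose scope includes this variable

def wdeg(var, assigned):
    # Weighted degree: sum of weights of constraints linking `var`
    # to at least one other currently unassigned variable.
    total = 0
    for c in var.constraints:
        if any(v is not var and v not in assigned for v in c.scope):
            total += c.weight
    return total

def select_variable(variables, assigned):
    # dom/wdeg: prefer a small current domain and a large weighted degree.
    unassigned = [v for v in variables if v not in assigned]
    return min(unassigned,
               key=lambda v: len(v.domain) / max(wdeg(v, assigned), 1))

def on_wipeout(constraint):
    # Called when propagating `constraint` empties some variable's domain.
    constraint.weight += 1
```

The key design point is that the weights are a by-product of propagation failures, so the heuristic adapts during search at essentially no extra cost.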
To assess these methods, the authors conduct an extensive experimental campaign on more than thirty benchmark families, covering scheduling, graph coloring, Latin squares, puzzles, and random CSPs of varying density. For each instance they record total CPU time, number of search nodes, and memory consumption, using a common solver framework that supports all heuristics under identical propagation and backtracking settings. The results show a clear advantage for failure‑based heuristics: dom/wdeg consistently reduces the search tree size by 15‑30 % and cuts runtime by roughly 20 % compared with pure MRV or impact‑based strategies. The benefit is especially pronounced on dense problems where constraint interactions are strong, confirming the intuition that recent failures are reliable indicators of future difficulty.
Motivated by the success of failure information, the second part of the paper introduces a novel class of revision‑ordering heuristics for AC. Conventional AC implementations process the propagation queue in FIFO order or simply follow the variable index order, which can cause many unnecessary re‑propagations, particularly when high‑weight constraints are examined late. The authors propose to reuse the same failure weights accumulated for variable ordering: each constraint’s weight is stored, and during AC the propagation list is sorted so that constraints with larger weights are revised first. This “failure‑driven revision ordering” reduces the number of constraint checks, list insertions/deletions, and, crucially, the depth of the explored search tree.
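One plausible way to realize such a weight-ordered propagation list is a priority queue keyed on the accumulated constraint weights. The sketch below is an assumption about the data structure, not the paper's implementation: `RevisionQueue` and the small `Constraint` stub are hypothetical names, and duplicate scheduling is suppressed so each constraint appears in the queue at most once.

```python
# Sketch of a failure-driven revision ordering for arc consistency:
# the constraint with the largest accumulated failure weight is revised first.
# Names and structure are illustrative assumptions.

import heapq
import itertools

class Constraint:
    def __init__(self, name, weight):
        self.name = name
        self.weight = weight    # failure weight accumulated by the variable heuristic

class RevisionQueue:
    def __init__(self):
        self._heap = []                     # entries: (-weight, tiebreak, constraint)
        self._scheduled = set()             # ids of constraints already queued
        self._counter = itertools.count()   # stable tiebreak for equal weights

    def push(self, constraint):
        if id(constraint) in self._scheduled:
            return                          # already awaiting revision
        self._scheduled.add(id(constraint))
        # Negate the weight so Python's min-heap yields the heaviest constraint first.
        heapq.heappush(self._heap,
                       (-constraint.weight, next(self._counter), constraint))

    def pop(self):
        _, _, c = heapq.heappop(self._heap)
        self._scheduled.discard(id(c))
        return c

    def __bool__(self):
        return bool(self._heap)
```

Compared with a FIFO list, each push/pop costs O(log n), but revising heavy (historically failure-prone) constraints first can expose a wipeout sooner and avoid re-propagating constraints whose supports the heavy revisions would have pruned anyway.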
Empirical evaluation of the new revision ordering on the same benchmark suite demonstrates substantial gains. On average, the number of constraint checks drops by 22 %, list operations by 18 %, and the average depth of the search tree shrinks by 11 %. In the most challenging scheduling instances, total runtime improves by more than 30 %. Moreover, when the failure‑based variable ordering (dom/wdeg) is combined with the failure‑driven revision ordering, a synergistic effect emerges: the combined configuration outperforms either component alone by an additional 10‑15 %. This synergy arises because both mechanisms exploit the same failure signal at different levels—variable selection and propagation ordering—thereby pruning unpromising branches earlier and avoiding redundant propagation.
The paper concludes that failure information should be a central design element in modern CSP solvers. By integrating it into both variable and revision ordering, solvers can achieve lower memory footprints, fewer constraint checks, and smaller search trees without sacrificing generality. The authors suggest future work on adaptive weight decay, hybridization with learning‑based heuristics, and extending the approach to other consistency levels (e.g., path consistency) or to SAT‑based hybrid solvers.