Phase Selection Heuristics for Satisfiability Solvers


In general, a conflict-driven DPLL SAT solver consists of variable selection, phase selection, Boolean constraint propagation (BCP), conflict analysis, clause learning, and learned-clause database maintenance. Optimizing any of these components can improve a solver's performance. This paper focuses on optimizing phase selection. Although the ACE (Approximation of the Combined lookahead Evaluation) weight is used in lookahead SAT solvers such as March, no conflict-driven SAT solver has so far applied the ACE weight successfully, because computing it is time-consuming. Here we apply the ACE weight to partial phase selection in conflict-driven SAT solvers; this can be seen as an improvement of the heuristic proposed by Jeroslow and Wang (1990). We incorporate the ACE heuristic and existing phase selection heuristics into a new solver, MPhaseSAT, which selects a phase heuristic in a way similar to portfolio methods. Experimental results show that adding the ACE heuristic improves conflict-driven solvers. On application instances in particular, MPhaseSAT with the ACE heuristic is significantly better than MPhaseSAT without it, and can even solve a few SAT instances that had so far remained unsolved.


💡 Research Summary

The paper addresses the phase‑selection component of modern conflict‑driven DPLL SAT solvers, a relatively under‑explored area compared with variable selection, clause learning, or BCP. The authors observe that while look‑ahead solvers such as March and MoRsat successfully employ the Approximation of the Combined lookahead Evaluation (ACE) weight to guide decisions, conflict‑driven solvers have avoided ACE because its computation is expensive. To bridge this gap, the authors propose a hybrid approach that applies ACE only in limited, low‑cost situations and combines it with the more traditional Jeroslow‑Wang (JW) and RSAT heuristics already used in solvers like PrecoSAT.
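The JW heuristic that ACE builds on can be sketched in a few lines. The clause representation (lists of integer literals, negative for negated) and the function names are illustrative, not from the paper:

```python
def jw_score(lit, clauses):
    """Jeroslow-Wang score of a literal: sum of 2**(-|c|)
    over all clauses c that contain lit."""
    return sum(2.0 ** -len(c) for c in clauses if lit in c)

def jw_phase(var, clauses):
    """Choose the polarity of var with the larger JW score."""
    return var if jw_score(var, clauses) >= jw_score(-var, clauses) else -var
```

For example, with clauses [[1, 2], [-1, 2, 3], [1, -3]], literal 1 scores 2⁻² + 2⁻² = 0.5 against 2⁻³ = 0.125 for -1, so the positive phase is chosen.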

ACE is defined as follows: for a literal x, the solver temporarily assigns x = 0 and x = 1, performs iterative unit propagation, and then evaluates the reduced formula. For each clause (CNF or XOR) that contains x, a weight is assigned based on the size of the clause after propagation:

  • CNF weight: W_CNF(n) = 5 · 2^(−n), where n is the clause size after propagation
  • XOR weight: W_XOR(n) = 5 · 0.85^n

The total ACE score of a literal is the sum of these weights over all affected clauses, and the phase with the larger ACE score is selected. This mirrors JW’s idea of preferring the polarity with the larger weight, but incorporates richer structural information (clause size after propagation, and XOR clauses).
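Under these definitions, the weight functions and score summation can be sketched as follows. Unit propagation itself is omitted, so the score function takes already-reduced (kind, size) pairs as input; this is a simplification for illustration, not the paper's implementation:

```python
def w_cnf(n):
    """Weight of a CNF clause of size n after propagation: 5 * 2**(-n)."""
    return 5.0 * 2.0 ** (-n)

def w_xor(n):
    """Weight of an XOR clause of size n after propagation: 5 * 0.85**n."""
    return 5.0 * 0.85 ** n

def ace_score(reduced_clauses):
    """Sum clause weights over ('cnf' | 'xor', size) pairs produced by
    propagating one polarity of the candidate literal."""
    return sum(w_cnf(n) if kind == "cnf" else w_xor(n)
               for kind, n in reduced_clauses)
```

The solver would compute this score once per polarity (x = 0 and x = 1) and keep the polarity with the larger total.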

Because ACE requires a full unit‑propagation pass for each candidate polarity, its runtime overhead is significant. The authors therefore restrict ACE to shallow search depths (≤ 30), where the decision tree is still small, and fall back to JW (or RSAT) at deeper levels. This “dynamic” use of ACE makes the heuristic practical for large industrial instances.
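The depth gate itself is trivial; a minimal sketch, with names of my own choosing:

```python
ACE_DEPTH_LIMIT = 30  # ACE is consulted only at shallow decision levels

def select_phase(depth, ace_choice, jw_choice):
    """Return the expensive ACE polarity near the root of the search
    tree and fall back to the cheap JW polarity at deeper levels."""
    return ace_choice if depth <= ACE_DEPTH_LIMIT else jw_choice
```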

A second contribution is a lightweight portfolio‑style instance classifier that decides which phase‑selection heuristic to use for a given problem. Instead of training a complex regression model on hundreds of features (as done in SATzilla or Borg‑sat), the authors select eight inexpensive, mostly static features:

  1. Number of clauses (#c)
  2. Number of variables (#v)
  3. Clause‑to‑variable ratio (#c/#v)
  4. Mean conflict depth from probing (E(#d))
  5. Number of unassigned variables after probing (U(#v))
  6. Number of binary clauses (#bin)
  7. Number of XOR clauses (#xor)
  8. Number of clauses of size ≥ 9 (L(#c))

Based on empirical observations, they define a set of threshold‑based rules that map feature ranges to a specific phase‑selection strategy. For example, if #c < 18 000 or E(#d) < 30, ACE is enabled; if #xor = 0, #c/#v > 100, and #v < 1500, the solver uses a “Tail‑JW” strategy (JW only in the last 20 depths); otherwise the default is the combined PrecoSAT heuristic (JW + RSAT). The rules are deliberately simple, aiming for low overhead and easy adaptation to new domains.
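The quoted rules can be sketched as a simple decision function over a feature dictionary. The keys and rule ordering are illustrative, and only the thresholds mentioned above are encoded; the paper's full rule table is larger:

```python
def choose_policy(f):
    """Map cheap instance features to a phase-selection policy.
    f: dict with keys num_clauses (#c), num_vars (#v),
    mean_conflict_depth (E(#d)), and num_xor (#xor)."""
    if f["num_clauses"] < 18_000 or f["mean_conflict_depth"] < 30:
        return "ACE"
    if (f["num_xor"] == 0
            and f["num_clauses"] / f["num_vars"] > 100
            and f["num_vars"] < 1500):
        return "Tail-JW"
    return "JW+RSAT"  # default PrecoSAT-style combination
```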

The resulting solver, named MPhaseSAT, integrates seven phase‑selection policies:

  1. Pure JW
  2. ACE (depth ≤ 30)
  3. JW + RSAT (the PrecoSAT policy)
  4. PrecoSAT with “tail” JW (JW only near the search frontier)
  5. ACE + PrecoSAT (ACE for the first 30 000 decisions, then PrecoSAT)
  6. PrecoSAT + random flips (similar to CryptoMiniSat)
  7. Local‑search‑based phase (using the state of a local search engine such as TNM)

The authors evaluate MPhaseSAT on six unsatisfiable benchmark instances drawn from the SAT 2009 competition (application, random, and crafted categories). Table 1 shows that on several application instances (e.g., cub‑h13‑unsat, sc‑hup‑l2s‑bc56s‑1‑k391) the ACE‑enabled configuration solves the problem significantly faster than the baseline PrecoSAT policy. On random and crafted instances the benefit is less pronounced, and in some cases ACE even slows the solver down, confirming the need for selective activation.

Key observations from the experiments:

  • ACE improves performance primarily on industrial‑style (application) instances where clause structure is richer and the early search space benefits from more informed polarity choices.
  • The depth limit (30) effectively caps the overhead; beyond this limit, JW’s lightweight computation dominates and prevents slowdown.
  • The simple rule‑based classifier, despite its crudeness, yields a noticeable overall speed‑up without requiring expensive offline training.
  • MPhaseSAT with ACE solves a few instances that were unsolvable by the baseline PrecoSAT solver, demonstrating that better phase decisions can break through hard search barriers.

The paper concludes that phase selection is a fertile ground for performance gains in SAT solving. By borrowing look‑ahead information (ACE) and applying it judiciously, one can obtain a hybrid heuristic that outperforms existing methods on a significant subset of benchmarks. The authors suggest future work in three directions: (1) developing more sophisticated, possibly learned, cost‑benefit models to decide when ACE should be invoked; (2) extending the feature set and classification mechanism to automatically adapt to new benchmark families; and (3) exploring ACE‑style weighting for other solver components such as variable selection or clause deletion. Overall, the study provides a compelling case that integrating structural, look‑ahead‑derived metrics into conflict‑driven solvers, when done with careful cost management, can yield tangible improvements.

