Improved Separations of Regular Resolution from Clause Learning Proof Systems
We prove that the graph tautology formulas of Alekhnovich, Johannsen, Pitassi, and Urquhart have polynomial size pool resolution refutations that use only input lemmas as learned clauses and without degenerate resolution inferences. We also prove that these graph tautology formulas can be refuted by polynomial size DPLL proofs with clause learning, even when restricted to greedy, unit-propagating DPLL search. We prove similar results for the guarded, xor-fied pebbling tautologies which Urquhart proved are hard for regular resolution.
💡 Research Summary
The paper investigates the relative power of regular resolution and clause‑learning proof systems, providing new separations that are both theoretically robust and practically relevant. The authors focus on two families of hard propositional formulas: the graph tautologies introduced by Alekhnovich, Johannsen, Pitassi, and Urquhart (often abbreviated as GT) and the guarded xor‑fied pebbling tautologies (GX‑Pebbling) originally shown by Urquhart to be difficult for regular resolution.
First, the authors examine GT formulas under the pool resolution model, an extension of ordinary resolution that allows learned clauses to be stored in a “pool” and reused later. Crucially, they restrict the learned clauses to input lemmas—clauses derived by input resolution, in which every inference step has at least one premise that is an input clause (an axiom or a previously learned lemma). Moreover, they forbid degenerate resolution inferences, i.e., steps in which the resolved literal is missing from one or both premises. Despite these stringent constraints, they construct polynomial‑size pool resolution refutations for the GT formulas. This demonstrates that even a very modest form of clause learning (input lemmas only) can dramatically outperform regular resolution, which requires exponential‑size proofs for the same formulas.
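To make the two restrictions concrete, here is a minimal sketch (not from the paper) of the propositional resolution rule, with a guard that rejects degenerate steps, together with a tiny input refutation in which every inference uses at least one input clause. Literals are nonzero integers (`v` / `-v`) and clauses are frozensets; the example Horn clauses are illustrative, not the GT formulas.

```python
def resolve(c1, c2, var):
    """Resolve c1 (containing var) with c2 (containing -var) on var.

    A "degenerate" inference would permit the resolved literal to be
    missing from a premise; proper resolution, as required by the
    paper's restriction, forbids this.
    """
    if var not in c1 or -var not in c2:
        raise ValueError("degenerate inference: resolved literal missing from a premise")
    return (c1 - {var}) | (c2 - {-var})

# Input refutation of the (illustrative) Horn clauses {x}, {-x, y}, {-y},
# encoding x as 1 and y as 2: each step has an input clause as a premise.
inputs = [frozenset({1}), frozenset({-1, 2}), frozenset({-2})]
step1 = resolve(inputs[0], inputs[1], 1)   # derives the lemma {y}
step2 = resolve(step1, inputs[2], 2)       # derives the empty clause
```

The final `step2` is the empty clause, so the three clauses are refuted; because every step resolved a derived clause against an input clause, the intermediate clause `{y}` qualifies as an input lemma in the sense sketched above.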
Next, the paper turns to DPLL‑based clause‑learning algorithms. The authors impose a “greedy,” unit‑propagating search discipline: unit propagation is applied eagerly whenever a unit clause is available, and the decisions and learned clauses are constrained accordingly. Under this restricted yet realistic strategy, the GT formulas still admit polynomial‑size refutations. The construction shows how learned clauses can be orchestrated to cut off large portions of the search space without needing sophisticated branching heuristics. This result bridges the gap between abstract proof‑complexity separations and the behavior of modern SAT solvers, which typically employ greedy, unit‑propagation‑driven search.
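The search discipline described above can be illustrated with a minimal DPLL sketch that applies unit propagation eagerly after every decision. This is an assumption‑level illustration of the general DPLL-with-propagation loop, not the paper's construction, and it omits clause learning for brevity. Clauses are lists of integer literals; an assignment is a set of literals.

```python
def unit_propagate(clauses, assign):
    """Eagerly apply unit propagation; return None on a falsified clause."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(l in assign for l in clause):
                continue                      # clause already satisfied
            pending = [l for l in clause if -l not in assign]
            if not pending:
                return None                   # conflict: clause falsified
            if len(pending) == 1:
                assign.add(pending[0])        # forced (unit) literal
                changed = True
    return assign

def dpll(clauses, assign=frozenset()):
    """Decide satisfiability, propagating units before each decision."""
    assign = unit_propagate(clauses, set(assign))
    if assign is None:
        return False
    vars_left = {abs(l) for c in clauses for l in c} - {abs(l) for l in assign}
    if not vars_left:
        return True
    v = min(vars_left)                        # naive decision heuristic
    return dpll(clauses, assign | {v}) or dpll(clauses, assign | {-v})
```

For example, `dpll([[1, 2], [-1, 2]])` returns `True`, while the four clauses `[[1, 2], [-1, 2], [1, -2], [-1, -2]]` are refuted (`False`); a CDCL solver would additionally record a learned clause at each conflict, which is the mechanism the paper's restrictions govern.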
The third contribution extends the analysis to the guarded xor‑fied pebbling tautologies. These formulas embed a pebbling game on a directed acyclic graph and augment each pebble constraint with an exclusive‑or condition, guarded by additional literals that control when a clause may be used. Regular resolution is known to require exponential‑size proofs for GX‑Pebbling. By adapting the same pool‑resolution technique and the greedy DPLL learning scheme, the authors produce polynomial‑size proofs for these formulas as well. The key insight is that the xor constraints can be captured by a small set of input lemmas that, once learned, propagate efficiently through unit propagation, neutralizing the combinatorial explosion that plagues regular resolution.
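To see why xor constraints interact well with unit propagation, here is a hedged illustration (not the paper's formulas): the constraint x XOR y = 1 encoded as two CNF clauses, where fixing either variable lets unit propagation immediately force the other. The encoding and the tiny propagator below are standard textbook material, assumed for illustration only.

```python
def unit_propagate(clauses, assign):
    """Minimal unit propagation; conflict handling omitted for brevity."""
    assign = set(assign)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(l in assign for l in clause):
                continue                      # clause already satisfied
            pending = [l for l in clause if -l not in assign]
            if len(pending) == 1:
                assign.add(pending[0])        # forced literal
                changed = True
    return assign

# x XOR y = 1 as CNF, with x encoded as 1 and y as 2:
xor_xy = [[1, 2], [-1, -2]]                   # (x or y) and (not x or not y)

unit_propagate(xor_xy, {1})                   # setting x true forces y false
unit_propagate(xor_xy, {-1})                  # setting x false forces y true
```

Each assignment to one side of the xor propagates the other side in a single unit step; this is the sense in which, once the relevant lemmas are learned, the xor structure costs nothing extra during search.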
Overall, the paper establishes three major points: (1) Pool resolution with only input lemmas and without degenerate steps is strictly stronger than regular resolution; (2) Greedy, unit‑propagating DPLL with clause learning can refute GT formulas with polynomial‑size proofs, showing that the theoretical advantage of clause learning survives under realistic solver restrictions; (3) The same techniques apply to guarded xor‑fied pebbling tautologies, confirming that the separation is not limited to a single family of formulas.
These findings have several implications. They provide a concrete, constructive demonstration that clause learning can simulate non‑regular resolution steps efficiently, suggesting new avenues for designing SAT solvers that exploit limited forms of learning while keeping implementation simple. Moreover, the work invites further investigation into which classes of input lemmas are most beneficial for different formula families, and how the prohibition of degenerate resolution influences proof size in broader contexts. By linking proof‑complexity separations to algorithmic strategies used in practice, the paper narrows the gap between theoretical lower bounds and the empirical performance of modern SAT solving technologies.