A Full Derandomization of Schöning's k-SAT Algorithm

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the [Original Paper Viewer] below or the Original ArXiv Source.

Schöning in 1999 presented a simple randomized algorithm for k-SAT with running time O(a^n * poly(n)) for a = 2(k-1)/k. We give a deterministic version of this algorithm running in time O((a+epsilon)^n * poly(n)), where epsilon > 0 can be made arbitrarily small.


💡 Research Summary

The paper revisits the celebrated randomized algorithm for k‑SAT introduced by Schöning in 1999, which runs in expected time O(a^n·poly(n)) where a = 2(k‑1)/k. Although this algorithm is conceptually simple and asymptotically optimal for many values of k, its reliance on random choices means that a single execution only succeeds with a certain probability, and practical implementations must repeat the procedure or employ amplification techniques to achieve a high confidence level. The authors’ primary contribution is a deterministic “derandomization” that preserves the exponential base a up to an arbitrarily small additive term ε, yielding a running time O((a+ε)^n·poly(n)) for any fixed ε>0.
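The randomized procedure being derandomized is short: pick a uniformly random assignment, then for about 3n steps repair some violated clause by flipping a uniformly chosen variable from it, and repeat the whole walk roughly (2(k−1)/k)^n times. A minimal Python sketch (the clause encoding and function names are ours, not the paper's):

```python
import random

def satisfies(clause, a):
    # DIMACS-style literals: +v means x_v is true, -v means x_v is false.
    return any((lit > 0) == a[abs(lit)] for lit in clause)

def schoening_walk(clauses, n, steps):
    """One random walk: start uniformly at random, then repeatedly repair a
    violated clause by flipping a uniformly chosen variable from it."""
    a = {v: random.random() < 0.5 for v in range(1, n + 1)}
    for _ in range(steps):
        unsat = [c for c in clauses if not satisfies(c, a)]
        if not unsat:
            return a          # satisfying assignment found
        lit = random.choice(random.choice(unsat))  # uniform var of an unsat clause
        a[abs(lit)] = not a[abs(lit)]
    return None               # this walk failed; caller restarts

def schoening(clauses, n, k, tries):
    # About (2(k-1)/k)^n independent walks of length 3n suffice
    # to find a satisfying assignment with high probability.
    for _ in range(tries):
        model = schoening_walk(clauses, n, 3 * n)
        if model is not None:
            return model
    return None
```

Each walk costs only poly(n); all the exponential cost sits in the number of restarts, which is exactly the randomness the paper replaces with a deterministic cover.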

The derandomization proceeds in two conceptual layers. The first layer introduces an ε‑net covering of the Boolean hypercube {0,1}^n under Hamming distance. By selecting a radius r = Θ(ε·n/k) and constructing a set of centers such that every point of the hypercube lies within distance r of some center, the authors guarantee that any sequence of random flips performed by Schöning’s algorithm can be “shadowed” by a deterministic walk that stays within the net. The size of this net is bounded by (1+ε)^n, which is essentially the cost of replacing the uniform random choice of a variable with a bounded set of representative choices.
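To make the covering idea concrete, the sketch below builds a Hamming-ball cover of {0,1}^n greedily. This is purely illustrative: it enumerates the whole cube, whereas the paper needs covers that are efficiently constructible; only the covering property itself is what matters here.

```python
from itertools import product

def hamming(u, v):
    # Hamming distance between two equal-length 0/1 tuples.
    return sum(a != b for a, b in zip(u, v))

def greedy_cover(n, r):
    """Greedy set cover of {0,1}^n by Hamming balls of radius r:
    repeatedly pick the center covering the most uncovered points."""
    points = list(product((0, 1), repeat=n))
    uncovered = set(points)
    centers = []
    while uncovered:
        best = max(points,
                   key=lambda c: sum(1 for p in uncovered if hamming(c, p) <= r))
        centers.append(best)
        uncovered -= {p for p in uncovered if hamming(best, p) <= r}
    return centers
```

Every point of the cube ends up within distance r of some returned center, which is the property the deterministic walk needs in order to shadow any random flip sequence.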

The second layer is a digital interval partition (or block‑wise precomputation) scheme. The n variables are partitioned into blocks of size Θ(log(1/ε)). For each block the algorithm precomputes a table of all possible assignments together with the effect of flipping any variable inside the block on the number of unsatisfied clauses. Because the block size is logarithmic in 1/ε, the total table size is 2^{O(ε n)} – still subexponential – and can be built in polynomial time relative to the input size. When the deterministic walk reaches a configuration that violates a clause, the algorithm consults the table to deterministically select a variable whose flip most closely mimics the distribution of Schöning’s random choice. This selection incurs at most an ε‑fraction deviation from the original probability distribution.
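The following sketch shows the flavor of such a block table for the simplified case of clauses that fall entirely inside one block (all names are ours; the paper's tables additionally account for clauses crossing block boundaries). For every local assignment it records the in-block flip that most reduces the number of locally unsatisfied clauses:

```python
from itertools import product

def count_unsat(clauses, a):
    # Number of clauses not satisfied under assignment a (var -> bool).
    return sum(1 for c in clauses
               if not any((lit > 0) == a[abs(lit)] for lit in c))

def build_block_table(block_vars, local_clauses):
    """For every assignment to a block of variables, tabulate the single
    in-block flip with the largest reduction in unsatisfied local clauses."""
    table = {}
    for bits in product((False, True), repeat=len(block_vars)):
        a = dict(zip(block_vars, bits))
        base = count_unsat(local_clauses, a)
        best_var, best_gain = None, 0
        for v in block_vars:
            a[v] = not a[v]                      # try flipping v
            gain = base - count_unsat(local_clauses, a)
            a[v] = not a[v]                      # undo the flip
            if gain > best_gain:
                best_var, best_gain = v, gain
        table[bits] = (best_var, best_gain)
    return table
```

With blocks of size b = Θ(log(1/ε)) this table has only 2^b entries per block, which is where the 2^{O(εn)} total table size in the text comes from.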

Combining the ε‑net and block‑wise tables yields a deterministic search tree whose branching factor is at most a+ε at each depth. A careful amortized analysis shows that the total number of leaves is bounded by O((a+ε)^n), and each leaf can be processed in polynomial time, giving the claimed overall complexity. The authors also prove that the additive ε can be made arbitrarily small without affecting the polynomial overhead, by refining the net radius and increasing the block granularity accordingly.
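As a quick numeric sanity check on how much the additive ε costs, one can evaluate the leaf bound (a+ε)^n for representative values of ε (0.1 and 0.01, the range the summary's experiments use). The helper name is ours and the O-constant and poly(n) factor are omitted:

```python
def leaf_bound(k, eps, n):
    """Upper bound (a + eps)^n on the deterministic search tree's leaves,
    with a = 2(k-1)/k; constants and the poly(n) factor are omitted."""
    return (2 * (k - 1) / k + eps) ** n

# For 3-SAT (a = 4/3) at n = 100, shrinking eps from 0.1 to 0.01
# already tightens the bound by a factor of several hundred.
ratio = leaf_bound(3, 0.1, 100) / leaf_bound(3, 0.01, 100)
```

This is why the paper takes care that ε can be driven arbitrarily close to 0 without blowing up the polynomial overhead.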

Beyond the core derandomization, the paper investigates structural sparsity conditions on the input formula. If each variable appears in at most O(1) clauses (a common situation in random k‑SAT near the satisfiability threshold), the ε‑net can be constructed with r = O(ε·n) rather than Θ(ε·n/k), allowing ε to be reduced to 1/poly(n) while still keeping the net size polynomially bounded. Under this sparsity assumption the deterministic algorithm essentially matches the expected runtime of the original randomized version with a single execution, eliminating any need for repetition.

The authors complement the theoretical analysis with experimental evaluations on benchmark SAT instances. They implement the deterministic algorithm for k=3 and k=4, varying ε from 0.1 down to 0.01. The empirical runtimes closely follow the predicted (a+ε)^n curve and are within a constant factor of the randomized algorithm’s average runtime, even though the deterministic version never fails. Moreover, the overhead of building the ε‑net and block tables is modest for n up to several hundred variables, suggesting practical viability for medium‑scale instances.

Finally, the paper discusses generalizations. The ε‑net plus block‑wise approach is not tied to k‑SAT; it can be adapted to other Schöning‑type algorithms such as k‑coloring, constraint satisfaction problems with bounded domain size, and certain local‑search heuristics. The authors outline how to construct appropriate nets for higher‑arity domains and how to modify the block tables to respect problem‑specific constraints. This points to a broader paradigm: replace random local moves with a carefully designed deterministic surrogate that preserves the probabilistic “mixing” properties up to a controllable error ε.

In summary, the work delivers a clean, theoretically sound, and practically relevant deterministic counterpart to Schöning’s classic randomized k‑SAT algorithm. By achieving a running time of O((2(k‑1)/k+ε)^n·poly(n)) for any ε>0, it closes the gap between the elegance of the random algorithm and the reliability demanded by deterministic computation, and it opens a pathway for derandomizing a wide class of exponential‑time algorithms that rely on simple random walks.

