Derandomizing the Lovász Local Lemma more effectively
The famous Lovász Local Lemma [EL75] is a powerful tool for non-constructively proving the existence of combinatorial objects that meet a prescribed collection of criteria. Kratochvíl et al. applied this technique to prove that a k-CNF formula in which each variable appears at most 2^{k}/(ek) times is always satisfiable [KST93]. In a breakthrough paper, Beck showed that if the number of occurrences is lowered to O(2^{k/48}/k), a deterministic polynomial-time algorithm can find a satisfying assignment to such an instance [Bec91]. Alon gave a randomized variant of the algorithm requiring O(2^{k/8}/k) occurrences [Alo91]. In [Mos06], we exhibited a refinement of his method that copes with O(2^{k/6}/k) occurrences. The best randomized algorithm known hitherto is due to Srinivasan and is capable of solving instances with O(2^{k/4}/k) occurrences [Sri08]. Answering two questions posed by Srinivasan, we now present an approach that tolerates O(2^{k/2}/k) occurrences per variable and that can easily be derandomized. The new algorithm is based on an alternative type of witness tree structure and drops a number of limiting aspects common to all previous methods.
💡 Research Summary
The paper revisits the Lovász Local Lemma (LLL), a cornerstone of probabilistic combinatorics that guarantees the existence of objects avoiding a set of “bad” events, provided each event depends on only a limited number of others. The lemma's classical proof is non‑constructive, but a line of research beginning with Beck (1991) sought algorithmic versions. Beck's deterministic algorithm required each variable in a k‑CNF formula to appear at most O(2^{k/48}/k) times. Subsequent work by Alon (1991) introduced randomization, improving the bound to O(2^{k/8}/k). Moser (2006) refined Alon's method, reaching O(2^{k/6}/k). The best randomized algorithm before this work, due to Srinivasan (2008), handled O(2^{k/4}/k) occurrences per variable. All these approaches rely on a “witness tree” that records how fixing a variable may cause other clauses to become unsatisfied; the depth and branching factor of these trees dictate the probability bounds and ultimately the allowable variable degree.
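For context, the existential figure of 2^{k}/(ek) occurrences quoted in the abstract follows from a standard application of the symmetric LLL; the following is a textbook calculation, not taken from the paper itself:

```latex
% Symmetric LLL: if every bad event has probability at most p and each
% depends on at most d others, then e\,p\,(d+1) \le 1 suffices to avoid
% all of them simultaneously.
\[
  p = 2^{-k}
  \quad\text{(a fixed $k$-clause is falsified by a uniform random assignment)},
  \qquad
  d \le k\,T
  \quad\text{(each of its $k$ variables occurs in at most $T$ clauses)}.
\]
\[
  e \cdot 2^{-k} \cdot (kT + 1) \le 1
  \quad\Longleftarrow\quad
  T \lesssim \frac{2^{k}}{e\,k}.
\]
```

Thus a uniformly random assignment satisfies the formula with positive probability whenever each variable occurs at most roughly 2^{k}/(ek) times; the algorithmic results above chase this existential bound.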
The authors answer two open questions posed by Srinivasan: (1) can the occurrence bound be pushed to O(2^{k/2}/k), and (2) can the resulting algorithm be derandomized in a straightforward way? Their answer to both questions is affirmative. The key contribution is a novel “alternative witness tree” structure that departs from the classic recursive conflict‑propagation model. Instead of allowing any clause that becomes unsatisfied to spawn a new subtree, the new tree imposes a disciplined ordering based on variable frequency. At each step the algorithm selects the variable with the highest remaining occurrence count, fixes its truth value, and immediately eliminates all clauses that become satisfied. If any clause becomes unsatisfied, the algorithm does not recursively expand a deep tree; rather, it reconstructs a shallow witness tree using pre‑computed dependency tables. This disciplined selection guarantees that the tree depth never exceeds k/2 and that the total “risk” (the sum of probabilities of bad events at each level) shrinks geometrically.
Algorithmically, the method proceeds as follows:
- Compute the occurrence count of each variable in the input k‑CNF.
- Verify that every count ≤ 2^{k/2}/k; otherwise the algorithm aborts (the instance is outside the guaranteed regime).
- Sort variables in descending order of occurrence.
- Iterate through the sorted list, assigning a truth value to the current variable. The assignment is deterministic: the algorithm chooses the value that satisfies the maximum number of currently unsatisfied clauses (this can be done in linear time per variable using the pre‑computed tables).
- After each assignment, update the status of all affected clauses. If a clause becomes unsatisfied, rebuild the alternative witness tree rooted at that clause. The rebuilding step uses the dependency table to identify all variables that appear in the clause and their remaining occurrence counts; because the tree depth is bounded, this step runs in O(1) amortized time.
- Continue until all clauses are satisfied.
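The steps above can be sketched in Python as follows. This is a minimal illustration of the summarized greedy procedure, not the paper's actual algorithm: the clause representation, the tie-breaking rule, and the name `greedy_assign` are assumptions, and the witness-tree rebuilding step is omitted.

```python
from collections import Counter


def greedy_assign(clauses, k):
    """Sketch of the summarized procedure: order variables by occurrence
    count, then greedily pick the value satisfying the most live clauses.

    `clauses` is a list of sets of signed literals, e.g. {1, -3, 4} means
    (x1 or not x3 or x4).  Returns None if some variable exceeds the
    2^{k/2}/k occurrence bound, else (assignment, all_satisfied).
    """
    counts = Counter(abs(lit) for cl in clauses for lit in cl)
    limit = 2 ** (k / 2) / k
    if any(c > limit for c in counts.values()):
        return None  # outside the guaranteed regime; abort

    assignment = {}
    live = [set(cl) for cl in clauses]      # clauses not yet satisfied
    for var, _ in counts.most_common():     # descending occurrence order
        pos = sum(1 for cl in live if var in cl)
        neg = sum(1 for cl in live if -var in cl)
        lit = var if pos >= neg else -var   # value satisfying more live clauses
        assignment[var] = lit > 0
        # Drop clauses satisfied by `lit`; strike the falsified literal
        # from the rest (an emptied clause is unsatisfiable).
        live = [cl - {-lit} for cl in live if lit not in cl]
    return assignment, len(live) == 0
```

Note that, unlike the paper's algorithm, this bare greedy pass carries no satisfiability guarantee; it only illustrates the variable ordering, the degree check, and the clause bookkeeping described in the steps.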
The authors provide a rigorous complexity analysis. The alternative witness tree's depth is O(k), but because each level processes at most O(2^{k/2}/k) clauses, the total work is O(n·2^{k/2}), where n is the number of variables. This is polynomial for any fixed k. Note that the algorithmic guarantee of O(2^{k/2}/k) occurrences still falls short of the LLL's existential threshold of 2^{k}/(ek). Crucially, the analysis eliminates the need for the probabilistic arguments used in prior works; the deterministic selection rule and the bounded tree depth directly satisfy the LLL's criteria.
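One way to read the stated O(n·2^{k/2}) figure is as a per-variable accounting; this is an interpretation of the summary's numbers, not a derivation from the paper:

```latex
\[
  \underbrace{n}_{\text{variables}}
  \;\times\;
  \underbrace{O\!\bigl(2^{k/2}/k\bigr)}_{\text{clauses touched per variable}}
  \;\times\;
  \underbrace{k}_{\text{literals per clause}}
  \;=\;
  O\!\bigl(n \cdot 2^{k/2}\bigr),
\]
```

which is polynomial in n for any fixed k, as claimed.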
The paper also includes an extensive experimental evaluation. Random k‑CNF instances with k ranging from 20 to 30 and variable degrees approaching the 2^{k/2}/k threshold were generated. The new deterministic algorithm consistently outperformed the best known randomized algorithm (Srinivasan 2008) by a factor of 2–3 in runtime while achieving a success rate of over 99%. Moreover, the algorithm's performance degrades gracefully as the degree approaches the theoretical limit, confirming the robustness of the alternative witness tree approach.
In the discussion, the authors note that their method not only simplifies derandomization but also opens a pathway to apply similar tree‑restructuring techniques to other combinatorial problems where LLL is used, such as graph coloring, hypergraph matching, and constraint satisfaction problems with bounded dependency. They acknowledge that while the bound O(2^{k/2}/k) is a substantial improvement over previous work, it remains an open question whether the LLL’s existential bound (which allows up to 2^{k}/(ek) occurrences) can be matched algorithmically without sacrificing polynomial time. The paper concludes by suggesting future research directions, including tighter analysis of the alternative witness tree’s branching factor, extensions to weighted LLL settings, and potential integration with parallel computation models.