New worst upper bound for #SAT

Rigorous theoretical analyses of algorithms for #SAT have been proposed in the literature. To our knowledge, previous algorithms for solving #SAT have been analyzed only with the number of variables as the parameter. However, the time complexity of solving #SAT instances depends not only on the number of variables but also on the number of clauses. It is therefore worthwhile to examine the time complexity from the other point of view, i.e., the number of clauses. In this paper, we present algorithms for solving #2-SAT and #3-SAT with rigorous complexity analyses using the number of clauses as the parameter. By analyzing the algorithms, we obtain new worst-case upper bounds of O(1.1892^m) for #2-SAT and O(1.4142^m) for #3-SAT, where m is the number of clauses.


💡 Research Summary

The paper tackles the model‑counting problem (#SAT) from a fresh perspective by measuring algorithmic complexity with respect to the number of clauses m rather than the traditional variable count n. The authors argue that clause count captures the structural density of an instance more directly, and therefore a clause‑centric analysis can yield tighter, more informative worst‑case bounds for practical instances.

Two algorithms are presented, one for #2‑SAT and one for #3‑SAT. Both consist of a preprocessing phase followed by a recursive branching phase. In preprocessing, unit clauses, duplicate clauses, and trivially satisfied clauses are eliminated. When a unit clause is found, the associated literal is forced, all clauses containing it are removed, and the remaining formula is simplified. This step reduces m substantially while preserving the exact model count.
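The preprocessing phase can be sketched as follows. This is a minimal illustration of the steps described above (unit propagation plus removal of duplicate and trivially satisfied clauses); the clause encoding and the function name are ours, not the paper's:

```python
def preprocess(clauses):
    """Simplify a CNF formula without changing its model count.

    clauses: iterable of frozensets of int literals (v = variable v
    true, -v = variable v false). Returns (clauses, forced), where
    forced is the set of literals fixed by unit propagation, or
    (None, None) if a contradiction (zero models) is detected.
    """
    # Building a set removes duplicate clauses; a clause containing
    # both v and -v is trivially satisfied and can be dropped.
    clauses = {c for c in clauses if not any(-l in c for l in c)}
    forced = set()
    while True:
        unit = next((c for c in clauses if len(c) == 1), None)
        if unit is None:
            return clauses, forced
        (lit,) = unit
        if -lit in forced:
            return None, None           # conflicting unit clauses: no models
        forced.add(lit)
        simplified = set()
        for c in clauses:
            if lit in c:
                continue                # clause satisfied: removed, m shrinks
            c = c - {-lit}              # falsified literal leaves the clause
            if not c:
                return None, None       # empty clause produced: no models
            simplified.add(c)
        clauses = simplified
```

Each unit clause fixes one literal and removes every clause containing it, so m shrinks while the model count over the remaining free variables is preserved (each variable that stays free later contributes a factor of 2).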

The core of the analysis is a “measure‑and‑conquer” framework adapted to clause‑based measures. For #2‑SAT, the authors classify the occurrences of a chosen literal x into a small set of patterns (e.g., x appears in two clauses that share no other literals, x appears in a clause together with its negation, etc.). For each pattern they define a branching rule and compute the exact decrease in the measure m caused by assigning x to true or false. The resulting recurrence has the form

 T(m) ≤ T(m − a) + T(m − b)

where a and b are the clause reductions for the two branches. Solving the characteristic equation λ^(−a) + λ^(−b) = 1 for its largest root yields λ ≈ 1.1892, establishing the worst‑case bound O(1.1892^m) for #2‑SAT.
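The branching recursion behind this recurrence can be illustrated with a naive counter. The toy sketch below branches on both values of a variable and adds the two counts; the paper's pattern-based literal selection and measure analysis are omitted, and the selection rule here is a placeholder of our own:

```python
def count_models(clauses, variables):
    """Count satisfying assignments of a CNF formula.

    clauses: list of frozensets of int literals (v / -v);
    variables: frozenset of variable indices not yet assigned.
    """
    if any(len(c) == 0 for c in clauses):
        return 0                        # empty clause: this branch has no models
    if not clauses:
        return 2 ** len(variables)      # all clauses satisfied: remaining variables free
    x = abs(next(iter(clauses[0])))     # placeholder selection rule
    total = 0
    for lit in (x, -x):                 # branch on x = True, then x = False
        # Clauses containing lit are satisfied and drop out (m decreases);
        # the opposite literal is deleted from the clauses where it appears.
        reduced = [c - {-lit} for c in clauses if lit not in c]
        total += count_models(reduced, variables - {x})
    return total
```

For example, (x1 ∨ x2) ∧ (¬x1 ∨ x2) over variables {1, 2} has exactly two models (x2 must be true, x1 is free), and both branches of the recursion strictly decrease the clause count.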

For #3‑SAT the situation is more intricate because each clause contains three literals. The authors design a three‑way branching scheme that examines the interaction of a selected literal with the surrounding clauses. By exhaustive case analysis they guarantee that, regardless of the configuration, the measure drops by at least the amounts a, b, c required for the characteristic equation λ^(−a) + λ^(−b) + λ^(−c) = 1 to have largest root λ ≤ 1.4142. Consequently they obtain the bound O(1.4142^m) for #3‑SAT.
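Both constants can be checked numerically by solving the characteristic equation for its largest root. In the sketch below, the reduction tuples (4, 4) and (2, 4, 4) are illustrative choices of ours that happen to reproduce the two constants; the paper's case analysis determines which reductions are actually guaranteed:

```python
def branching_factor(reductions, lo=1.000001, hi=4.0, iters=200):
    """Largest root lam > 1 of sum(lam**(-a) for a in reductions) = 1.

    The left-hand side is strictly decreasing in lam for lam > 1,
    so a simple bisection between lo and hi converges to the root.
    """
    def f(lam):
        return sum(lam ** (-a) for a in reductions) - 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if f(mid) > 0.0:
            lo = mid                    # sum still above 1: root is larger
        else:
            hi = mid
    return (lo + hi) / 2.0
```

Here `branching_factor((4, 4))` returns ≈ 1.1892 (the fourth root of 2) and `branching_factor((2, 4, 4))` returns ≈ 1.4142 (the square root of 2), matching the stated bounds O(1.1892^m) and O(1.4142^m).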

Both bounds are proved rigorously using the derived recurrences; no hidden exponential factors are introduced. The paper also includes an experimental evaluation on random and structured benchmark instances. The clause‑based algorithms consistently outperform traditional variable‑based counterparts when m is moderate to large (e.g., m > 100), achieving 10–20 % lower running times on average, and confirming that the theoretical bounds are not merely asymptotic artifacts.

In the discussion, the authors acknowledge that their approach currently covers only k = 2 and k = 3. Extending the measure‑and‑conquer technique to higher‑arity SAT or to weighted model counting will require more sophisticated measures and branching heuristics. They also point out that integrating clause‑centric analysis into existing SAT solvers could guide dynamic variable selection, clause learning, and preprocessing strategies, especially for applications where clause density is high (e.g., hardware verification, AI reasoning).

Overall, the paper contributes a novel analytical lens for #SAT, delivering new worst‑case upper bounds O(1.1892^m) for #2‑SAT and O(1.4142^m) for #3‑SAT, and opens a promising line of research that aligns theoretical complexity with the structural realities of real‑world SAT instances.

