The complexity of weighted and unweighted #CSP


We give some reductions among problems in (nonnegative) weighted #CSP which restrict the class of functions that needs to be considered in computational complexity studies. Our reductions can be applied to both exact and approximate computation. In particular, we show that a recent dichotomy for unweighted #CSP can be extended to rational-weighted #CSP.


💡 Research Summary

The paper investigates the computational complexity of counting constraint satisfaction problems (#CSP) when constraints are equipped with non‑negative weights. It begins by formalizing weighted #CSP as a family of instances defined over a set 𝔽 of non‑negative functions, each of which assigns a weight to every tuple of values taken by the variables in its scope. The central difficulty in analyzing such problems stems from the enormous variety of possible weight functions; previous dichotomy results for unweighted #CSP cannot be applied directly, because the presence of weights changes both exact counting and approximation behavior.
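To make this formalization concrete, a weighted #CSP instance can be evaluated by brute force: the value of an assignment is the product of its constraint weights, and the counting problem asks for the sum of these values over all assignments. The function and variable names below are illustrative, not taken from the paper, and the exponential-time enumeration is only meant to pin down the definition:

```python
from itertools import product

def weighted_csp_count(num_vars, domain, constraints):
    """Brute-force partition function of a weighted #CSP instance.

    constraints: list of (scope, f) pairs, where scope is a tuple of
    variable indices and f maps a tuple of domain values to a
    non-negative weight.  The value of an assignment is the product of
    all constraint weights; the answer is the sum over all assignments.
    """
    total = 0
    for assignment in product(domain, repeat=num_vars):
        weight = 1
        for scope, f in constraints:
            weight *= f(tuple(assignment[v] for v in scope))
            if weight == 0:
                break  # a violated 0-1 constraint kills this assignment
        total += weight
    return total

# Example: with the 0-1 NAND constraint on each edge of a 3-vertex path,
# the partition function counts independent sets of the path.
nand = lambda t: 0 if t == (1, 1) else 1
print(weighted_csp_count(3, (0, 1), [((0, 1), nand), ((1, 2), nand)]))  # → 5
```

Replacing the 0‑1 functions by genuinely weighted ones (e.g., a unary weight 2 on each selected vertex) turns the count into a weighted sum over the same solution set, which is exactly the shift in behavior the paper's reductions are designed to handle.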

The authors introduce a suite of polynomial‑time reductions that systematically simplify the function set 𝔽 without altering the essential counting or approximation properties of the problem. Three core transformations are presented:

  1. Scaling Normalization – multiplying every function by a common constant factor c > 0. This scales the weight of each assignment by a predictable factor (cᵐ for an instance with m constraint occurrences) and therefore leaves tractability unchanged.
  2. Multiplicative Decomposition – expressing a high‑arity function as a product of lower‑arity functions and introducing auxiliary constraints to capture the same contribution. This reduces the arity of functions while preserving the total weight.
  3. Boolean Masking – converting arbitrary non‑negative weights into 0‑1 values by introducing a mask function that records whether a tuple is “allowed”. The original weight is then recovered by a separate unary weight function. The combination yields an equivalent instance with only Boolean (indicator) constraints.
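The Boolean‑masking idea (item 3) can be sketched in a toy special case where the weight happens to factor into unary parts; the concrete functions below are illustrative and much simpler than the paper's general construction. The partition-function contribution of the weighted constraint equals that of a 0‑1 mask combined with unary weight functions:

```python
from itertools import product

# Original binary weight: zero on the forbidden tuple (1, 1),
# otherwise a product of per-argument weights 2**x * 3**y.
f = lambda x, y: 0 if (x, y) == (1, 1) else 2 ** x * 3 ** y

# Masked form: a 0-1 relation recording which tuples are allowed,
# plus unary weight functions recovering the multiplicative part.
r = lambda x, y: 0 if (x, y) == (1, 1) else 1   # Boolean mask
u0 = lambda x: 2 ** x                            # unary weight on 1st argument
u1 = lambda y: 3 ** y                            # unary weight on 2nd argument

orig = sum(f(x, y) for x, y in product((0, 1), repeat=2))
masked = sum(r(x, y) * u0(x) * u1(y) for x, y in product((0, 1), repeat=2))
assert orig == masked  # both sums equal 6
```

The same total weight is obtained, but the non‑unary constraint is now a pure indicator, which is what allows unweighted dichotomy machinery to be brought to bear.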

All three operations are computable in polynomial time and are exact‑preserving (the exact count of solutions remains identical) as well as AP‑preserving (approximation ratios are unchanged). Consequently, any weighted #CSP can be reduced to an instance whose constraints belong to a much smaller “core” family of functions.

The core function set identified by the authors consists of:

  • Constant functions 0 and 1,
  • Binary relations that are either symmetric or essentially unary (i.e., depend on at most one variable),
  • Unary functions with rational weights.

The paper proves that for any non‑negative weighted #CSP, there exists a polynomial‑time Turing reduction to a problem that uses only functions from this core set. Therefore, the complexity landscape of weighted #CSP can be studied by focusing exclusively on this restricted family.

From the approximation perspective, the authors show that the reductions are approximation‑preserving (AP‑reductions). If a weighted #CSP admits a Fully Polynomial Randomized Approximation Scheme (FPRAS), then the reduced unweighted instance also admits an FPRAS, and vice versa. In particular, if the reduced instance is AP‑hard (i.e., as hard to approximate as any problem in #P), the original weighted problem inherits this hardness. This bridges the gap between exact and approximate counting for weighted models.

The most significant theoretical contribution is the extension of a recent dichotomy for unweighted #CSP to the rational‑weighted setting. Prior work (e.g., Bulatov, Dyer–Richerby) classified unweighted #CSP into two categories: tractable (solvable in polynomial time) when all constraint relations belong to certain algebraic classes such as affine, bijunctive, or Horn, and #P‑complete otherwise. By applying the core reductions, the authors demonstrate that the same dichotomy holds for rational‑weighted #CSP: if every function in the core set lies in the tractable classes, the counting problem remains polynomial‑time solvable even with rational weights; if any function falls outside these classes, the problem becomes #P‑complete for exact counting and AP‑hard for approximation.
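The tractable affine side of such a dichotomy has a concrete algorithmic core: counting the satisfying assignments of a system of linear equations over GF(2) reduces to a rank computation, since a consistent system of rank r over n variables has exactly 2^(n−r) solutions. A minimal self-contained sketch (this standard Gaussian-elimination routine is an illustration of the affine case, not code from the paper):

```python
def count_affine_solutions(rows, n):
    """Count solutions of a linear system over GF(2).

    rows: list of (coeffs, b) with coeffs a length-n 0/1 list, meaning
    the equation  sum_i coeffs[i] * x_i = b  (mod 2).
    A consistent system of rank r has exactly 2**(n - r) solutions;
    an inconsistent one has none.
    """
    # Pack each equation into an (n+1)-bit integer:
    # coefficient bits 0..n-1, right-hand side in bit n.
    eqs = [sum(c << i for i, c in enumerate(coeffs)) | (b << n)
           for coeffs, b in rows]
    rank = 0
    for col in range(n):                      # Gaussian elimination over GF(2)
        pivot = next((e for e in eqs if (e >> col) & 1), None)
        if pivot is None:
            continue
        eqs.remove(pivot)
        eqs = [e ^ pivot if (e >> col) & 1 else e for e in eqs]
        rank += 1
    if any(e == 1 << n for e in eqs):         # leftover equation 0 = 1
        return 0                              # inconsistent: no solutions
    return 2 ** (n - rank)
```

For example, the two equations x0 + x1 = 1 and x1 + x2 = 1 over three variables have rank 2, giving 2^(3−2) = 2 solutions. The hard side of the dichotomy has no such counting shortcut, which is precisely the #P‑completeness claim.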

The paper also discusses the limits of this extension. The results rely on the weights being rational numbers; extending to arbitrary real or algebraic numbers would require handling issues of numerical precision and continuity that are not addressed here. Nonetheless, the authors outline a roadmap for such generalizations, suggesting that the core reduction framework could be adapted with additional analytic tools.

In summary, the paper makes three key advances:

  1. Reduction Framework – a set of polynomial‑time, exact‑ and AP‑preserving transformations that collapse the infinite space of non‑negative weight functions to a finite, well‑understood core.
  2. Complexity Preservation – proof that both exact and approximate complexities are invariant under these reductions, allowing results from unweighted #CSP to be transferred to weighted versions.
  3. Dichotomy Extension – a rigorous extension of the known unweighted #CSP dichotomy to rational‑weighted #CSP, establishing a clear boundary between tractable and intractable cases.

These contributions provide a powerful toolkit for researchers studying counting problems with weights, simplify the landscape of weighted #CSP, and open avenues for future work on more general weight domains and on algorithmic techniques that exploit the identified core structure.

