The Complexity of Integer Bound Propagation

Bound propagation is an important Artificial Intelligence technique used in Constraint Programming tools to deal with numerical constraints. It is typically embedded within a search procedure (“branch and prune”) and used at every node of the search tree to narrow down the search space, so it is critical that it be fast. The procedure invokes constraint propagators until a common fixpoint is reached, but the known algorithms for this have a pseudo-polynomial worst-case time complexity: they are fast indeed when the variables have a small numerical range, but they have the well-known problem of being prohibitively slow when these ranges are large. An important question is therefore whether strongly-polynomial algorithms exist that compute the common bound consistent fixpoint of a set of constraints. This paper answers this question. In particular we show that this fixpoint computation is in fact NP-complete, even when restricted to binary linear constraints.


💡 Research Summary

The paper investigates the computational complexity of bound propagation, a fundamental technique in constraint programming (CP) used to tighten the domains of integer variables under numerical constraints. In modern CP systems bound propagation is invoked at every node of a branch‑and‑prune search tree; therefore its speed directly influences the overall solving time. Existing algorithms compute a common bound‑consistent fixpoint by repeatedly applying constraint propagators until no further domain reductions occur. These algorithms are known to run in pseudo‑polynomial time: their running time is polynomial in the size of the input and in the magnitude of the variable domains. Consequently they are fast when domains are small, but become impractically slow when domains are large (e.g., exponential in the number of bits needed to represent a variable).
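The propagate-until-fixpoint loop described above can be sketched as follows. This is an illustrative Python sketch, not the paper's algorithm; the `bounds`/`propagators` representation is assumed purely for exposition:

```python
def propagate_to_fixpoint(bounds, propagators):
    """Re-apply propagators until no bound changes (a common fixpoint).

    bounds: dict mapping a variable name to an integer interval (lo, hi).
    propagators: callables returning suggested (var, (lo, hi)) tightenings.
    Returns the fixpoint bounds, or None if some domain becomes empty.
    """
    changed = True
    while changed:
        changed = False
        for prop in propagators:
            for var, (lo, hi) in prop(bounds):
                old_lo, old_hi = bounds[var]
                new_lo, new_hi = max(old_lo, lo), min(old_hi, hi)
                if new_lo > new_hi:
                    return None               # empty domain: inconsistency
                if (new_lo, new_hi) != (old_lo, old_hi):
                    bounds[var] = (new_lo, new_hi)
                    changed = True
    return bounds
```

Note that nothing bounds the number of sweeps except how far the intervals can shrink, which is exactly the pseudo-polynomial behaviour discussed above.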

The central question addressed by the authors is whether a strongly‑polynomial algorithm exists that can compute the same fixpoint in time polynomial only in the number of variables and constraints, independent of domain size. The authors answer this question negatively by proving that the fixpoint computation problem is NP‑complete, even when the constraint set is restricted to binary linear inequalities (constraints involving at most two variables).

The paper proceeds as follows. First, it formalises bound propagation: each constraint is a linear inequality of the form \(a_i x_i + a_j x_j \le b\) (or \(\ge b\)), with integer coefficients. The propagation operator examines the current lower and upper bounds of the involved variables, derives the tightest possible new bounds that satisfy the inequality, and updates the variable domains. This operator is applied iteratively to all constraints until a global fixpoint is reached, at which point the domains are bound‑consistent: every constraint can be satisfied by some assignment within the current bounds.
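For a single inequality a·x + b·y ≤ c, the tightest bounds can be computed directly with exact integer division. The sketch below is illustrative (the function names are our own, not from the paper):

```python
def hull_leq(a, b, c, xd, yd):
    """Bound-consistent filtering of a*x + b*y <= c over integer intervals
    xd = (xl, xh) and yd = (yl, yh), with nonzero integer coefficients.
    A value of x is kept iff some y in its interval satisfies the inequality."""
    def shrink(coef, dom, slack):
        lo, hi = dom
        if coef > 0:
            # coef * v <= slack  =>  v <= floor(slack / coef)
            return (lo, min(hi, slack // coef))
        # coef < 0 flips the inequality: v >= ceil(slack / coef),
        # computed with exact integer division to avoid float error
        return (max(lo, -(slack // -coef)), hi)

    xl, xh = xd
    yl, yh = yd
    # the partner term contributes at least min(b*yl, b*yh), and symmetrically
    new_xd = shrink(a, xd, c - min(b * yl, b * yh))
    new_yd = shrink(b, yd, c - min(a * xl, a * xh))
    return new_xd, new_yd
```

For example, x + y ≤ 5 with x ∈ [0, 10], y ∈ [2, 10] tightens to x ∈ [0, 3], y ∈ [2, 5] in a single application.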

To establish NP‑hardness, the authors construct a polynomial‑time reduction from the canonical NP‑complete problem 3‑SAT. For each Boolean variable \(v_k\) they introduce two integer variables \(x_k\) and \(\bar{x}_k\) together with the equality constraint \(x_k + \bar{x}_k = 1\), thereby encoding the truth value of \(v_k\) as a 0/1 assignment. Each clause \(\ell_1 \lor \ell_2 \lor \ell_3\) is translated into the linear constraint \(x_{\ell_1} + x_{\ell_2} + x_{\ell_3} \ge 1\), forcing at least one literal to take the value 1. All coefficients are 0, 1, or −1, and each constraint involves at most two variables after a simple transformation (e.g., by introducing auxiliary variables to split three‑term sums). The resulting bound‑propagation instance reaches a bound‑consistent fixpoint with non‑empty domains if and only if the original 3‑SAT formula is satisfiable. Since a candidate fixpoint is a polynomial‑size certificate that can be verified by a single scan over all constraints, the problem lies in NP; together with the reduction, this yields NP‑completeness.
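One way the variable and clause translation could look in code. This is an illustrative sketch using DIMACS-style signed literals; the paper's binarisation gadget with auxiliary variables is not reproduced here:

```python
def encode_3sat(n_vars, clauses):
    """Translate a 3-CNF formula into 0/1 variables and linear constraints.

    clauses: lists of nonzero signed ints (DIMACS style), e.g. [1, -2, 3].
    Emits x_k + xbar_k = 1 per variable and sum-of-literals >= 1 per clause.
    """
    constraints = []
    for v in range(1, n_vars + 1):
        constraints.append(("eq", [(1, f"x{v}"), (1, f"xbar{v}")], 1))
    for clause in clauses:
        terms = [(1, f"x{l}" if l > 0 else f"xbar{-l}") for l in clause]
        constraints.append(("geq", terms, 1))   # at least one literal true
    return constraints


def satisfies(assignment, constraints):
    """Check a 0/1 assignment (dict name -> value) against the constraints."""
    for kind, terms, rhs in constraints:
        total = sum(coef * assignment[var] for coef, var in terms)
        if kind == "eq" and total != rhs:
            return False
        if kind == "geq" and total < rhs:
            return False
    return True
```

A satisfying Boolean assignment corresponds exactly to a 0/1 solution of the encoded system, which is the property the reduction exploits.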

An important corollary is that the hardness persists even for the very restricted class of binary linear constraints. Thus, the intuition that “simple” constraints might admit strongly‑polynomial propagation is disproved. The authors also discuss why existing pseudo‑polynomial algorithms cannot avoid this barrier: they necessarily iterate a number of times proportional to the size of the domains, which can be exponential for the constructed hard instances. Consequently, a strongly‑polynomial algorithm would imply \(P = NP\), contradicting widely held complexity assumptions.
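The domain-proportional iteration count is easy to exhibit with a classic two-constraint cycle (a standard textbook illustration, not a construction from the paper): x ≥ y + 1 and y ≥ x + 1 over [0, M] force the lower bounds to climb a couple of steps per round until the domains empty out.

```python
def slow_fixpoint(M):
    """Count propagation rounds for x >= y + 1 and y >= x + 1 over [0, M].

    Each round raises the lower bounds only slightly, so the number of
    rounds grows linearly in M, i.e. exponentially in the bit-size of M."""
    x_lo, y_lo, rounds = 0, 0, 0
    while x_lo <= M and y_lo <= M:        # domains still non-empty
        x_lo = max(x_lo, y_lo + 1)        # propagate x >= y + 1
        y_lo = max(y_lo, x_lo + 1)        # propagate y >= x + 1
        rounds += 1
    return rounds                         # fixpoint: both domains empty
```

Doubling M doubles the round count, even though the textual size of the instance grows by only one bit.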

The paper further analyses special cases where polynomial‑time propagation is feasible. For instance, if all coefficients are non‑negative (monotone constraints) or if the constraint graph is acyclic and each variable appears in a bounded number of constraints, the fixpoint can be reached in linear time. However, these subclasses are limited and do not cover the general binary linear setting addressed by most CP solvers.
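As a concrete instance of the acyclic case, a chain x1 ≤ x2 ≤ ... ≤ xn reaches its fixpoint in two sweeps, independent of domain size. This is our own illustrative example of such a tractable subclass, not a fragment from the paper:

```python
def chain_fixpoint(bounds):
    """Fixpoint for the chain x1 <= x2 <= ... <= xn in O(n) time.

    bounds: list of integer intervals (lo, hi), tightened in place.
    One backward sweep propagates upper bounds, one forward sweep
    propagates lower bounds; no dependence on interval widths."""
    n = len(bounds)
    for i in range(n - 2, -1, -1):        # upper bounds flow right-to-left
        lo, hi = bounds[i]
        bounds[i] = (lo, min(hi, bounds[i + 1][1]))
    for i in range(1, n):                 # lower bounds flow left-to-right
        lo, hi = bounds[i]
        bounds[i] = (max(lo, bounds[i - 1][0]), hi)
    return bounds
```

The two-sweep schedule works precisely because the constraint graph is acyclic: no tightening can flow back and trigger further rounds.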

In the discussion, the authors argue that the NP‑completeness result reshapes expectations for CP tool developers. Rather than seeking a universal strongly‑polynomial propagator, research should focus on (i) heuristic or approximate propagation methods that work well in practice despite worst‑case exponential behaviour, (ii) identifying structural properties of real‑world models that guarantee tractable propagation, and (iii) hybrid approaches that combine bound propagation with other domain‑reduction techniques such as domain splitting or interval reasoning.

The conclusion reiterates the main contribution: a rigorous proof that computing the common bound‑consistent fixpoint for integer constraints is NP‑complete, even under severe syntactic restrictions. This establishes a theoretical limit on the efficiency of exact bound propagation and motivates future work on specialized algorithms, approximation schemes, and empirical studies of the practical impact of the worst‑case complexity on modern CP systems.

