An Algebraic Theory of Complexity for Discrete Optimisation
Discrete optimisation problems arise in many different areas and are studied under many different names. In many such problems the quantity to be optimised can be expressed as a sum of functions of a restricted form. Here we present a unifying theory of complexity for problems of this kind. We show that the complexity of a finite-domain discrete optimisation problem is determined by certain algebraic properties of the objective function, which we call weighted polymorphisms. We define a Galois connection between sets of rational-valued functions and sets of weighted polymorphisms and show how the closed sets of this Galois connection can be characterised. These results provide a new approach to studying the complexity of discrete optimisation. We use this approach to identify certain maximal tractable subproblems of the general problem, and hence derive a complete classification of complexity for the Boolean case.
💡 Research Summary
The paper develops a unified algebraic framework for analysing the computational complexity of finite‑domain discrete optimisation problems, specifically those whose objective can be expressed as a sum of rational‑valued cost functions (so‑called valued constraint satisfaction problems, or VCSPs). The authors introduce the notion of a weighted polymorphism, an extension of the classical polymorphism concept, which captures the idea that an operation preserves a set of cost functions up to a weighted average. In the simplified form used in this summary, a k‑ary operation f : D^k → D is a weighted polymorphism of a set F of cost functions if, for every φ ∈ F and all argument tuples x₁, …, x_k (with f applied componentwise), φ(f(x₁, …, x_k)) ≤ ∑_{i=1}^k w_i φ(x_i), where the weights w_i are non‑negative and sum to one. This definition makes it possible to speak of a set of cost functions being closed under such operations.
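On a small domain the simplified condition above can be checked exhaustively. The sketch below is a minimal brute-force checker assuming that simplified inequality (not the paper's full definition); the function name `is_weighted_polymorphism` and the example cost values are invented here for illustration.

```python
from itertools import product

def is_weighted_polymorphism(f, weights, phi, domain, arity):
    """Brute-force check of the simplified inequality
    phi(f(x_1, ..., x_k)) <= sum_i w_i * phi(x_i), where each x_i is an
    `arity`-tuple over `domain` and f is applied componentwise."""
    k = len(weights)
    for xs in product(product(domain, repeat=arity), repeat=k):
        # image[j] = f applied to the j-th coordinate of each argument tuple
        image = tuple(f(*(x[j] for x in xs)) for j in range(arity))
        if phi(image) > sum(w * phi(x) for w, x in zip(weights, xs)):
            return False
    return True

# Hypothetical example: a unary cost function on D = {0, 1} with
# phi(0) < phi(1); binary min satisfies the inequality, binary max does not.
phi = lambda x: {0: 2.0, 1: 5.0}[x[0]]
print(is_weighted_polymorphism(min, (0.5, 0.5), phi, (0, 1), 1))  # True
print(is_weighted_polymorphism(max, (0.5, 0.5), phi, (0, 1), 1))  # False
```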
The central technical contribution is the establishment of a Galois connection between sets of rational‑valued functions and sets of weighted polymorphisms. For a function set F the authors define Pol(F) as the collection of all weighted polymorphisms that preserve every φ ∈ F; conversely, for a polymorphism set W they define Inv(W) as the set of all cost functions preserved by every w ∈ W. They show that the pair (Pol, Inv) forms a Galois connection whose compositions are closure operators, i.e. Pol(Inv(W)) = cl(W) and Inv(Pol(F)) = cl(F), and they characterise the resulting closed sets. This Galois correspondence translates complexity questions about a VCSP into purely algebraic questions about the set of weighted polymorphisms that stabilises its cost functions.
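One direction of the correspondence can be illustrated by brute force on the Boolean domain: enumerate all 16 binary operations and keep those that preserve a fixed cost function. This is only a toy rendering of Pol under the simplified inequality used in this summary, with a cost function φ whose values are chosen arbitrarily.

```python
from itertools import product

D = (0, 1)
phi = {0: 1.0, 1: 4.0}  # arbitrary example cost values on the Boolean domain

def preserves(table):
    # simplified preservation condition with weights (1/2, 1/2):
    # phi(f(x, y)) <= (phi(x) + phi(y)) / 2 for all x, y in D
    return all(phi[table[(x, y)]] <= (phi[x] + phi[y]) / 2
               for x, y in product(D, repeat=2))

# every binary operation on {0,1}, represented as a truth table
keys = list(product(D, repeat=2))
all_ops = [dict(zip(keys, vals)) for vals in product(D, repeat=4)]
pol = [t for t in all_ops if preserves(t)]
# For this phi, only constant-0 and min (Boolean AND) survive.
print(len(pol))
```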
Using this machinery, the authors identify a special class of weighted polymorphisms—energy‑decreasing operators—that guarantee polynomial‑time solvability. If Pol(F) contains such an operator, the associated VCSP can be solved by iteratively applying the operator to reduce the objective value, converging in polynomial time. Conversely, the absence of any energy‑decreasing polymorphism typically implies NP‑hardness, which the paper substantiates via reductions from classic hard problems (e.g., SAT, Max‑Cut).
The theory is then instantiated for the Boolean domain D = {0,1}. Because the domain has only two elements, the authors can enumerate all possible weighted polymorphisms and determine precisely which classes of cost functions admit an energy‑decreasing operator. Their classification yields two broad families:
- Tractable families – these include submodular cost functions, affine (linear) functions, and certain convex/concave unary functions. Submodular functions are shown to be exactly those preserved by the “min” polymorphism, reproducing the well‑known polynomial‑time algorithms for submodular minimisation. Affine functions are closed under weighted averaging, and convex unary functions admit a simple averaging operator that monotonically reduces cost.
- NP‑hard families – any family of cost functions that can encode hard Boolean predicates (e.g., parity constraints or non‑submodular two‑variable interactions) fails to admit an energy‑decreasing polymorphism, and the corresponding VCSPs are proved NP‑hard.
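The connection between submodularity and the min/max pair can be checked mechanically on small instances. The sketch below brute-forces the standard submodularity inequality φ(x∧y) + φ(x∨y) ≤ φ(x) + φ(y); the helper name and the two example cost functions are chosen here for illustration, not taken from the paper.

```python
from itertools import product

def is_submodular(phi, n):
    """Brute-force the <min, max> condition for an n-variable Boolean
    cost function: phi(x ∧ y) + phi(x ∨ y) <= phi(x) + phi(y)
    for all pairs of assignments, with ∧/∨ taken componentwise."""
    pts = list(product((0, 1), repeat=n))
    return all(
        phi(tuple(map(min, x, y))) + phi(tuple(map(max, x, y)))
        <= phi(x) + phi(y)
        for x, y in product(pts, repeat=2))

cut  = lambda x: float(x[0] != x[1])  # "not-equal" penalty: submodular
same = lambda x: float(x[0] == x[1])  # "equal" penalty (a Max-Cut term): not
print(is_submodular(cut, 2), is_submodular(same, 2))  # True False
```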
Beyond the dichotomy, the authors describe maximal tractable classes: for a given tractable set of cost functions F, a largest superset of F that remains tractable. In the Boolean case the maximal tractable class consists of all submodular, affine, and convex unary functions; adding any function outside it immediately pushes the problem into the NP‑hard regime.
The paper concludes with algorithmic implications. When an energy‑decreasing polymorphism exists, a generic local‑to‑global optimisation scheme can be employed: repeatedly replace a tuple of variable assignments by the result of the polymorphism, guaranteeing a monotone decrease in the total cost and convergence in polynomially many steps. For the intractable cases, the authors present standard hardness reductions, confirming that no polynomial‑time algorithm is expected unless P = NP.
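The local‑to‑global scheme can be caricatured in a few lines. This is an illustrative sketch under the summary's simplified reading, not the paper's algorithm; the function name, step budget, and toy objective are all invented here.

```python
import random

def polymorphism_descent(cost, start, domain, op=min, steps=300, seed=0):
    """Illustrative sketch: repeatedly combine the current assignment with
    a random one via the operation `op`, applied componentwise, keeping
    the result whenever the total cost strictly decreases."""
    rng = random.Random(seed)
    best, best_cost = tuple(start), cost(start)
    for _ in range(steps):
        other = tuple(rng.choice(domain) for _ in best)
        cand = tuple(op(a, b) for a, b in zip(best, other))
        if cost(cand) < best_cost:
            best, best_cost = cand, cost(cand)
    return best, best_cost

# Toy objective: the sum of the variables, minimised at the all-zero point.
# Combining with min never increases the cost, so descent reaches the optimum.
assign, value = polymorphism_descent(sum, (1,) * 6, (0, 1))
print(assign, value)
```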
In summary, the work makes four major contributions:
- It introduces weighted polymorphisms as a natural algebraic tool for VCSPs, extending the classical polymorphism theory.
- It establishes a rigorous Galois connection between cost functions and weighted polymorphisms, providing a clean characterisation of closed clones.
- It delivers a complete complexity classification for Boolean VCSPs, identifying the exact boundary between tractable and NP‑hard cases and describing maximal tractable subclones.
- It translates the algebraic insight into concrete algorithmic strategies for the tractable side, while reinforcing hardness for the remaining cases.
Overall, the paper offers a powerful algebraic lens for understanding discrete optimisation complexity, unifying many previously disparate results and opening avenues for further exploration of tractable fragments beyond the Boolean setting.