FourierCSP: Differentiable Constraint Satisfaction Problem Solving by Walsh-Fourier Expansion
The constraint satisfaction problem (CSP) is fundamental in mathematics, physics, and theoretical computer science. Continuous local search (CLS) solvers, a recent advancement, can achieve highly competitive results on certain classes of Boolean satisfiability (SAT) problems. Motivated by these advances, we extend the CLS framework from Boolean SAT to general CSP with finite-domain variables and expressive constraint formulations. We present FourierCSP, a continuous optimization framework that generalizes the Walsh-Fourier transform to CSP, allowing versatile constraints to be transformed into compact multilinear polynomials, thereby avoiding the need for auxiliary variables and memory-intensive encodings. We employ projected subgradient and mirror descent algorithms with provable convergence guarantees, and further combine them to accelerate gradient-based optimization. Empirical results on benchmark suites demonstrate that FourierCSP is scalable and competitive, significantly broadening the class of problems that can be efficiently solved by differentiable CLS techniques and paving the way toward end-to-end neurosymbolic integration.
💡 Research Summary
This paper introduces FourierCSP, a differentiable framework for solving general finite‑domain constraint satisfaction problems (CSPs). Traditional continuous local search (CLS) methods have achieved impressive results on Boolean SAT by exploiting the Walsh‑Fourier expansion, but they cannot directly handle multi‑valued variables or expressive constraints without first Boolean‑encoding the problem, which leads to a combinatorial explosion in dimensionality and memory usage. FourierCSP overcomes this limitation by extending the Walsh‑Fourier transform to arbitrary finite domains.
The authors first define indicator‑based Fourier bases for each variable’s domain and show that any Boolean‑valued constraint function can be uniquely expressed as a multilinear polynomial over these bases, with coefficients given by the expectation of the function multiplied by the corresponding basis function. By relaxing each variable to lie in its probability simplex, they obtain a continuous search space that is the Cartesian product of all the simplices. A randomized rounding operator maps a point in this continuous space back to a discrete assignment, and the expectation of the original constraint values under this rounding yields a smooth multilinear objective function F(P). Crucially, they prove that the CSP is satisfiable if and only if the minimum of F(P) over the simplex product equals −|C|, where |C| is the number of constraints. Moreover, they establish that no interior point of the simplex product can be a local optimum: all minima lie on the boundary, guaranteeing that a solution rounded from a local minimum is meaningful.
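The relaxation can be illustrated with a small, self‑contained sketch. The toy constraint `neq`, the ±1 encoding (−1 when satisfied, +1 when violated, so that the sum over constraints bottoms out at −|C|), and the brute‑force enumeration are illustrative assumptions for this example; the paper instead obtains the same multilinear polynomial in closed form via the generalized Walsh‑Fourier coefficients, without enumerating assignments.

```python
import itertools
import numpy as np

def expected_constraint(c, probs):
    """Expectation of a +/-1-valued constraint c under the product
    distribution given by `probs` (one probability vector per variable,
    each a point in its simplex). Brute-force enumeration, so only
    suitable for small constraint scopes; the Walsh-Fourier expansion
    gives this same multilinear polynomial without enumeration."""
    total = 0.0
    domains = [range(len(p)) for p in probs]
    for assignment in itertools.product(*domains):
        weight = np.prod([p[v] for p, v in zip(probs, assignment)])
        total += weight * c(assignment)
    return total

# Toy constraint over a 3-valued domain: x != y,
# encoded as -1 if satisfied and +1 if violated.
neq = lambda a: -1.0 if a[0] != a[1] else 1.0

# A relaxed point: each variable carries a distribution over {0, 1, 2}.
p_x = np.array([0.5, 0.25, 0.25])
p_y = np.array([0.0, 1.0, 0.0])
print(expected_constraint(neq, [p_x, p_y]))  # -0.5

# At a one-hot (vertex) point encoding a satisfying assignment,
# the objective reaches -|C| (here |C| = 1).
print(expected_constraint(neq, [np.array([1.0, 0.0, 0.0]), p_y]))  # -1.0
```

The second call shows the satisfiability criterion in miniature: a satisfying discrete assignment corresponds to a vertex of the simplex product where F(P) attains −|C|.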
To optimize F(P), the paper proposes three algorithms. Projected Gradient Descent (PGD) performs a Euclidean gradient step followed by projection onto the simplex product; with step size 1/L (L is the smoothness constant) it converges to an ε‑critical point in O(|C|·L/ε²) iterations. Mirror Descent (MD) uses negative entropy as the mirror map, yielding updates in the KL‑geometry of the simplices; with step size bounded by √(2Θ)·ρ/√T (Θ depends on domain sizes, ρ is a Lipschitz constant) it reaches an ε‑critical point in O(2Θ·ρ/ε²) iterations. Recognizing that PGD excels when gradients are large while MD is more stable when gradients are small, the authors introduce a Hybrid Descent (HD) that computes both PGD and MD candidates at each iteration and selects the one that yields the lower objective value. This hybrid scheme inherits the fast descent of PGD in steep regions and the robustness of MD near flat regions, and the paper provides theoretical convergence guarantees for each method.
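The three updates can be sketched in plain NumPy, here for a single variable's simplex (the helper names, the sort‑based Euclidean projection, and the fixed step size are illustrative assumptions; the step‑size schedules from the paper's convergence analysis are not reproduced):

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex,
    via the standard sort-and-threshold algorithm."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - css / idx > 0)[0][-1]
    theta = css[rho] / (rho + 1)
    return np.maximum(v - theta, 0.0)

def pgd_step(p, grad, eta):
    """Projected gradient descent: Euclidean step, then projection."""
    return project_simplex(p - eta * grad)

def md_step(p, grad, eta):
    """Mirror descent with the negative-entropy mirror map, i.e. the
    multiplicative (exponentiated-gradient) update in KL geometry."""
    w = p * np.exp(-eta * grad)
    return w / w.sum()

def hybrid_step(p, grad, eta, f):
    """Hybrid descent: evaluate both candidate steps and keep the
    one with the lower objective value f."""
    candidates = [pgd_step(p, grad, eta), md_step(p, grad, eta)]
    return min(candidates, key=f)
```

Because the hybrid step takes the pointwise minimum of the two candidates' objective values, each iteration descends at least as much as the better of PGD and MD, which is the mechanism behind the "fast in steep regions, stable in flat regions" behavior described above.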
The implementation builds on JAX‑OPT for automatic differentiation and integrates simplex projection and mirror‑map operations from the MIRROR‑DESCENT library. Default settings include a maximum of 500 gradient steps, a tolerance of 0.001, and backtracking line search for step‑size selection; optional FISTA acceleration is also supported.
Experimental evaluation covers a wide range of benchmark CSPs, including graph‑coloring, Sudoku‑style puzzles, scheduling problems, and global constraints such as AllDifferent and cardinality constraints. FourierCSP is compared against state‑of‑the‑art SAT‑based CLS solvers (FourierSAT, GRADSAT), integer linear programming solvers (CPLEX), and constraint programming systems (OR‑Tools). Results show that FourierCSP reaches the optimal objective (−|C|) or a negligible gap within a few hundred iterations on most instances, often with 2–10× faster gradient computation than Boolean‑encoded CLS and with substantially lower memory consumption. The hybrid descent consistently outperforms pure PGD or MD in terms of convergence speed and robustness across diverse problem families.
In summary, the contributions of the paper are: (1) a mathematically rigorous generalization of the Walsh‑Fourier expansion to finite‑domain CSPs, providing a compact multilinear representation without auxiliary variables; (2) provably convergent projected subgradient, mirror‑descent, and hybrid optimization algorithms tailored to the simplex product geometry; (3) an empirical demonstration that the resulting differentiable solver scales to realistic CSP sizes and competes with traditional SAT, ILP, and CP approaches. The work opens a path toward fully differentiable neurosymbolic systems where constraint reasoning can be embedded directly into end‑to‑end learning pipelines, and suggests future extensions such as learning constraint parameters and handling optimization objectives beyond pure satisfaction.