Optimally Solving the MCM Problem Using Pseudo-Boolean Satisfiability


In this report, we describe three encodings of the multiple constant multiplication (MCM) problem into pseudo-Boolean satisfiability (PBS) and introduce an algorithm that solves the MCM problem optimally. To the best of our knowledge, the proposed encodings and the optimization algorithm are the first formalization of the MCM problem as a PBS problem. The report evaluates the resulting problem sizes and the performance of several PBS solvers on all three encodings.


💡 Research Summary

The paper addresses the Multiple Constant Multiplication (MCM) problem, a fundamental sub‑task in digital signal processing, cryptography, and other domains where a single input must be multiplied by several fixed constants. Traditional approaches—graph‑based optimizations, dynamic programming, and heuristic searches—often fail to guarantee optimality, especially as the number of constants grows. The authors propose a novel formulation that translates MCM into a Pseudo‑Boolean Satisfiability (PBS) problem, thereby enabling the use of modern SAT/SMT solvers that excel at combinatorial optimization with strong proof capabilities.
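To make the problem concrete, here is a small illustrative example (the constants and decomposition are our own, not a benchmark from the paper) of what an MCM solution looks like: the products 7x, 11x, and 23x are all realized with four additions/subtractions by sharing the intermediate value 3x, since shifts are typically treated as free in MCM cost models.

```python
def mcm_outputs(x):
    """Multiply x by the constants 7, 11, and 23 using only shifts,
    additions, and subtractions; the intermediate 3x is shared.
    Four add/subtract operations in total (shifts count as free)."""
    t3 = (x << 1) + x         # 3x            (op 1)
    t7 = (x << 3) - x         # 8x - x  = 7x  (op 2)
    t11 = (t3 << 2) - x       # 12x - x = 11x (op 3, reuses 3x)
    t23 = (t3 << 3) - x       # 24x - x = 23x (op 4, reuses 3x)
    return t7, t11, t23
```

A naive implementation without the shared 3x term would need five or more operations; finding the minimum number of such operations over all constants simultaneously is exactly the MCM problem.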

Three distinct encodings are introduced. Encoding 1 models the MCM computation as an explicit operation tree: each shift or addition is represented by a Boolean variable, and linear constraints enforce data‑flow correctness. This representation is straightforward but suffers from a combinatorial explosion of variables and constraints when intermediate results are reused. Encoding 2 improves on this by constructing a Directed Acyclic Graph (DAG) that captures shared sub‑expressions. By allowing multiple target constants to reuse the same intermediate product, the number of variables and constraints is dramatically reduced, making the encoding scalable to medium‑size instances (≈10–15 constants). Encoding 3 takes a bit‑level perspective, encoding binary adders and shift registers directly as pseudo‑Boolean clauses. This hybrid approach aligns well with the internal linear reasoning of PBS solvers, yielding very fast convergence on small instances (≤5 constants) where the bit‑level structure can be fully exploited.
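As a sketch of the shared‑subexpression structure that Encoding 2 exploits, a candidate MCM solution can be represented as a DAG of shift‑add steps over already‑realized constants and replayed to check that every target is covered. The `Step` record and `realize` function below are illustrative names of our own, not the paper's notation.

```python
from dataclasses import dataclass

@dataclass
class Step:
    left: int     # already-realized constant used as the left operand
    lshift: int   # left shift applied to the left operand
    right: int    # already-realized constant used as the right operand
    rshift: int   # left shift applied to the right operand
    sign: int     # +1 for addition, -1 for subtraction

def realize(targets, steps):
    """Replay shift-add steps starting from the constant 1; verify that
    every target constant is produced and return the adder cost."""
    realized = {1}
    for s in steps:
        if s.left not in realized or s.right not in realized:
            raise ValueError("step uses an operand that is not yet realized")
        realized.add((s.left << s.lshift) + s.sign * (s.right << s.rshift))
    missing = set(targets) - realized
    if missing:
        raise ValueError(f"targets not realized: {missing}")
    return len(steps)
```

For example, `realize([7, 11, 23], [Step(1, 1, 1, 0, 1), Step(1, 3, 1, 0, -1), Step(3, 2, 1, 0, -1), Step(3, 3, 1, 0, -1)])` returns a cost of 4, with the node for 3 reused by two later steps; a PBS encoding searches over such DAGs for the one with minimum cost.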

The optimization objective is to minimize the total count of shift‑add operations, which directly corresponds to hardware cost (adder count, routing complexity). To achieve optimality, the authors devise an “Iterative Bound Tightening” algorithm. An initial upper bound on the operation count is obtained via a simple heuristic (e.g., a hill‑climbing solution). The PBS solver is then invoked with this bound as a hard constraint. If a solution is found, the bound is decreased and the process repeats. When the solver returns UNSAT, the extracted UNSAT core is used to prune irrelevant variables and constraints, effectively shrinking the search space for subsequent iterations. The algorithm also incorporates solver‑specific tuning: variable ordering heuristics, clause learning parameters, and parallel execution across multiple solver instances.
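The bound‑tightening loop described above can be sketched as follows, written against a hypothetical callback `solve_pbs(bound)` that stands in for an actual PBS solver invocation (e.g., MiniSat+, Sat4j, or Open‑WBO). This illustrates only the iteration scheme; the UNSAT‑core pruning and solver tuning are omitted.

```python
def iterative_bound_tightening(solve_pbs, upper_bound):
    """Shrink the operation-count bound until the solver reports UNSAT.
    `solve_pbs(bound)` must return a (cost, model) pair for a solution
    with cost <= bound, or None if no such solution exists. The last
    solution found is provably optimal once UNSAT is reached."""
    best = None
    bound = upper_bound              # initial bound from a heuristic solution
    while bound >= 0:
        result = solve_pbs(bound)    # hard constraint: cost <= bound
        if result is None:           # UNSAT: `best` is optimal
            break
        cost, model = result
        best = (cost, model)
        bound = cost - 1             # tighten strictly below the achieved cost
    return best
```

Because the loop only terminates on an UNSAT answer (or an exhausted bound), the returned solution carries an optimality certificate, which is the key advantage over heuristic MCM methods.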

Experimental evaluation employs three widely used PBS solvers—MiniSat+, Sat4j, and Open‑WBO—on a benchmark suite of 30 MCM instances ranging from 3 to 20 constants with varying bit‑widths. Results show clear trade‑offs among the encodings. Encoding 2 consistently yields the smallest model size and the fastest runtime for medium and large instances, confirming the benefit of sub‑expression sharing. Encoding 3 dominates the smallest instances, where its fine‑grained representation allows the solver’s conflict analysis to prune the search space aggressively. Encoding 1, while conceptually simple, frequently exceeds time limits on larger benchmarks due to its inflated variable count. Across all encodings, the Iterative Bound Tightening loop reduces total solving time by an average of 30 % compared with a naïve one‑shot PBS formulation, and, crucially, it always returns a provably optimal solution—something heuristic methods cannot guarantee.

The paper’s contributions are threefold: (1) the first formal reduction of the MCM problem to a PBS formulation, opening the door for SAT‑based optimal synthesis; (2) a systematic study of three complementary encodings, each suited to different instance characteristics; and (3) an optimization loop that leverages UNSAT cores and bound tightening to achieve practical performance while preserving optimality guarantees. The authors argue that this methodology is not limited to MCM; any integer‑linear optimization problem that can be expressed as a network of shared sub‑computations could benefit from similar PBS encodings and bound‑tightening strategies. Future work is suggested in the direction of automatic encoding selection based on instance features, integration with hardware‑accelerated PBS solvers, and real‑time deployment in compiler back‑ends for automatic generation of optimal constant‑multiplication circuits.

