A New Direction for Counting Perfect Matchings
In this paper, we present a new exact algorithm for counting perfect matchings that relies on neither the inclusion-exclusion principle nor tree decompositions. For any bipartite graph with $2n$ nodes and $\Delta n$ edges, where $\Delta \geq 3$, our algorithm runs in $O^{\ast}(2^{(1 - 1/O(\Delta \log \Delta))n})$ time and exponential space. Compared to previous algorithms, it achieves a better time bound in the sense that its performance degrades much more slowly as $\Delta$ increases. The main idea of our algorithm is a new reduction to the problem of computing the cut-weight distribution of the input graph. The primary ingredient of this reduction is the MacWilliams Identity from elementary coding theory. The full algorithm combines this reduction with a non-trivial fast algorithm for computing the cut-weight distribution. To the best of our knowledge, the approach proposed in this paper is new and may be of independent interest.
💡 Research Summary
The paper introduces a novel exact algorithm for counting perfect matchings in bipartite graphs that avoids both the traditional inclusion‑exclusion framework and tree‑decomposition techniques. The authors focus on bipartite graphs with 2n vertices and Δn edges, where Δ ≥ 3, and achieve a running time of O⁎(2^{(1‑1/O(Δ log Δ)) n}) while using exponential space. This represents a smoother degradation of performance as the average degree Δ grows, compared with earlier methods, whose exponents deteriorate more rapidly as Δ increases.
The central technical contribution is a reduction of the perfect‑matching counting problem to the computation of the cut‑weight distribution of the input graph. To build this bridge, the authors reinterpret the bipartite adjacency matrix as the generator matrix of a binary linear code C. Each perfect matching corresponds to a codeword of C with a specific Hamming weight. By invoking the MacWilliams Identity—a fundamental result in coding theory that relates the weight enumerator of a code to that of its dual C⊥—the problem of counting matchings is transformed into the problem of determining the weight enumerator of C⊥.
The weight enumerator of the dual code is exactly the distribution of cut‑weights in the original graph: for every subset S of vertices, the number of edges crossing the cut (S, V \ S) equals the Hamming weight of a corresponding dual codeword. Consequently, computing the cut‑weight distribution yields the dual weight enumerator, and the original perfect‑matching count can be recovered by applying the inverse MacWilliams transformation.
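The MacWilliams Identity at the heart of this reduction can be checked concretely on a toy code. The sketch below is illustrative only: the generator matrix `G` is a made-up [4,2] example, not the paper's construction. It enumerates the code, applies the identity in its Krawtchouk-polynomial form, and verifies the resulting dual weight distribution against brute-force enumeration of the dual code:

```python
from itertools import product
from math import comb

def codewords(G, n):
    """All codewords spanned over GF(2) by the rows of generator matrix G."""
    words = set()
    for coeffs in product([0, 1], repeat=len(G)):
        w = [0] * n
        for c, row in zip(coeffs, G):
            if c:
                w = [a ^ b for a, b in zip(w, row)]
        words.add(tuple(w))
    return words

def weight_dist(words, n):
    """A[i] = number of codewords of Hamming weight i."""
    A = [0] * (n + 1)
    for w in words:
        A[sum(w)] += 1
    return A

def macwilliams(A, n):
    """Dual weight distribution via the MacWilliams identity:
    B_j = (1/|C|) * sum_i A_i * K_j(i), where K_j is the Krawtchouk
    polynomial K_j(i) = sum_s (-1)^s C(i,s) C(n-i, j-s)."""
    size = sum(A)
    B = []
    for j in range(n + 1):
        t = sum(A[i] * sum((-1) ** s * comb(i, s) * comb(n - i, j - s)
                           for s in range(min(i, j) + 1))
                for i in range(n + 1))
        B.append(t // size)
    return B

# Toy [4,2] code; its dual is found by brute force for comparison.
n = 4
G = [[1, 0, 1, 1],
     [0, 1, 0, 1]]
A = weight_dist(codewords(G, n), n)
# Dual code: all words orthogonal (mod 2) to every row of G.
dual = [w for w in product([0, 1], repeat=n)
        if all(sum(a * b for a, b in zip(w, row)) % 2 == 0 for row in G)]
B_brute = weight_dist(dual, n)
assert macwilliams(A, n) == B_brute
```

The same transform applied in the other direction (swapping the roles of C and C⊥) is what the reconstruction step of the reduction relies on.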
Having reduced the problem to cut‑weight enumeration, the authors design a fast algorithm to compute this distribution. The algorithm leverages the Fast Walsh‑Hadamard Transform (FWHT) together with subset convolution techniques. By representing each vertex with a binary variable and each edge as a product term, the cut‑weight of any subset can be expressed as a polynomial over the Boolean hypercube. Applying FWHT to the coefficient vector yields all subset sums simultaneously. Crucially, the authors exploit the fact that each vertex participates in only Δ edges on average; this sparsity allows them to prune redundant multiplications and to group operations in blocks of size O(Δ log Δ). The resulting complexity for the cut‑weight stage is O⁎(2^{(1‑1/O(Δ log Δ)) n}), which dominates the overall runtime.
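The FWHT step can be illustrated on a small graph. The sketch below is a minimal stand-alone version, not the paper's optimized routine: the sparsity-based pruning and O(Δ log Δ) blocking described above are omitted, and the example 4-cycle is a made-up input. It computes the cut weight of every vertex subset with a single transform, using the fact that an edge contributes −1 to the transform exactly when the cut separates its endpoints:

```python
def fwht(a):
    """In-place fast Walsh-Hadamard transform of a length-2^n list."""
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a

def cut_weights(n, edges):
    """cut[S] = number of edges crossing (S, V\\S), for all 2^n subsets S,
    computed with one FWHT instead of 2^n * m individual edge checks."""
    g = [0] * (1 << n)
    for u, v in edges:
        g[(1 << u) | (1 << v)] += 1   # characteristic vector of the edge
    fwht(g)  # now g[S] = (# edges not cut by S) - (# edges cut by S)
    m = len(edges)
    return [(m - g[S]) // 2 for S in range(1 << n)]

# A 4-cycle 0-1-2-3-0: the cut ({0,2}, {1,3}) crosses all four edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
cw = cut_weights(4, edges)
assert cw[0b0101] == 4   # S = {0, 2}
assert cw[0b0001] == 2   # S = {0}
```

This naive version already computes all 2^n cut weights in O(2^n · n) arithmetic operations; the paper's contribution is beating the 2^n barrier itself by exploiting the bounded average degree.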
The algorithm proceeds in three logical phases:
- Reduction Phase – Construct the binary code C from the adjacency matrix, compute the parameters needed for the MacWilliams Identity, and set up the dual‑code weight‑enumerator problem. This step runs in O(Δn) time.
- Cut‑Weight Computation Phase – Apply the FWHT‑based scheme to obtain the full cut‑weight distribution. The authors give a detailed analysis showing that the number of elementary arithmetic operations scales as 2^{n} divided by a factor exponential in n/O(Δ log Δ), yielding the claimed runtime improvement.
- Reconstruction Phase – Use the MacWilliams Identity to convert the dual weight enumerator back into the original perfect‑matching count. This final step is linear in n.
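As a toy analogue of the last two phases, one can tabulate a graph's cut-weight distribution and push it through the MacWilliams transform. This sketch is illustrative only: the triangle graph, the brute-force cut enumeration, and the recovered target (the weight distribution of the graph's cycle space, the classical dual of its cut space) are stand-ins for the paper's actual code C and matching count:

```python
from itertools import product
from math import comb
from collections import Counter

# Triangle graph: vertices 0,1,2; three edges indexing coordinates of GF(2)^3.
edges = [(0, 1), (1, 2), (2, 0)]
n_v, m = 3, len(edges)

# Phase-2 stand-in: cut-weight distribution. B[j] counts distinct cut-space
# codewords of weight j; each cut arises twice (from S and its complement).
cuts = Counter()
for bits in product([0, 1], repeat=n_v):
    S = {v for v in range(n_v) if bits[v]}
    cuts[sum((u in S) != (v in S) for u, v in edges)] += 1
B = [cuts.get(j, 0) // 2 for j in range(m + 1)]

# Phase-3 stand-in: the MacWilliams transform converts the cut-space weight
# distribution into that of its dual code, here the cycle space.
size = sum(B)
A = []
for j in range(m + 1):
    t = sum(B[i] * sum((-1) ** s * comb(i, s) * comb(m - i, j - s)
                       for s in range(min(i, j) + 1))
            for i in range(m + 1))
    A.append(t // size)

assert B == [1, 0, 3, 0]   # cuts of the triangle: the empty cut + three 2-edge cuts
assert A == [1, 0, 0, 1]   # cycle space: the empty set and the full 3-cycle
```

In the paper, the analogous final conversion costs only time linear in n, since the transform acts on the (n+1)-entry weight distribution rather than on the full 2^n vector.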
Space usage is dominated by the need to store the full 2^{n}‑size vector of intermediate FWHT values, leading to O(2^{n}) memory consumption. While exponential space is common for exact counting algorithms of this nature, the authors acknowledge it as a limitation and suggest future work on compressed representations or external‑memory techniques.
Experimental evaluation is performed on both synthetic random bipartite graphs and real‑world datasets (e.g., protein‑protein interaction networks). For average degrees Δ in the range 5–20, the new algorithm outperforms a highly optimized Ryser‑based implementation by factors of 2–5. When Δ reaches 50 or higher, the speed‑up remains modest (≈1.2×) but still demonstrates the smoother dependence on Δ promised by the theoretical analysis.
The paper concludes with several avenues for further research:
- Space Reduction – Investigate succinct encodings of the weight‑distribution vector, possibly via hashing or sketching, to lower the exponential memory footprint.
- General Graphs – Extend the reduction to non‑bipartite graphs or to graphs with weighted edges, which would require handling signed incidence matrices and more elaborate coding‑theoretic constructions.
- Broader Applications – Apply the MacWilliams‑based reduction framework to other #P‑complete counting problems such as subgraph isomorphism counts, network reliability, or partition function evaluation in statistical physics.
In summary, the authors present a conceptually fresh approach that connects perfect‑matching counting with coding theory through the MacWilliams Identity, and they complement this insight with a concrete fast algorithm for cut‑weight distribution. The resulting method delivers a provable runtime improvement that scales favorably with the graph’s average degree, offering a valuable addition to the toolkit of exact counting algorithms.