An analysis of a random algorithm for estimating all the matchings
Counting all the matchings of a bipartite graph can be reduced, via a transformation due to Yan Huo, to computing the permanent of a matrix obtained from an extended bipartite graph. Rasmussen proposed a simple method (RM) to approximate the permanent, which achieves a critical ratio of O($n\omega(n)$) for almost all 0-1 matrices, making it a promising and practical way to attack this #P-complete problem. In this paper, we analyze the performance of this method when it is applied, through that transformation, to count all the matchings. We prove that with a certain probability the critical ratio becomes very large, growing by a factor larger than any polynomial in $n$, even in the sense of almost all 0-1 matrices. Hence RM fails to work well when counting all the matchings via the permanent of the transformed matrix. In other words, the known methods for estimating the permanent must be applied with care when counting all the matchings through that transformation.
💡 Research Summary
The paper investigates the reliability of applying Rasmussen’s Random Matching (RM) algorithm to estimate the total number of matchings in a bipartite graph when the problem is first transformed into a permanent computation. The transformation, originally proposed by Yan Huo, constructs a $2n\times 2n$ 0‑1 matrix $A'$ from an $n\times n$ bipartite adjacency matrix $B$ by placing $B$ in the upper‑left block, the identity matrix $I_n$ in the lower‑right block, and zeros elsewhere. It is well known that $\operatorname{perm}(A')$ equals the number of all (not only perfect) matchings of the original graph, so estimating $\operatorname{perm}(A')$ yields the desired count.
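The block layout described above can be sketched as follows. The helper names `build_extended` and `permanent` are illustrative, and the brute-force permanent is only meant for small sanity checks, not for the #P-hard general case.

```python
from itertools import permutations

def build_extended(B):
    """Build the 2n x 2n 0-1 matrix with B in the upper-left block,
    the identity I_n in the lower-right block, and zeros elsewhere,
    following the block layout described in the summary."""
    n = len(B)
    A = [[0] * (2 * n) for _ in range(2 * n)]
    for i in range(n):
        for j in range(n):
            A[i][j] = B[i][j]      # adjacency block
        A[n + i][n + i] = 1        # identity block
    return A

def permanent(A):
    """Brute-force permanent; exponential time, fine only for tiny matrices."""
    m = len(A)
    total = 0
    for sigma in permutations(range(m)):
        prod = 1
        for i in range(m):
            prod *= A[i][sigma[i]]
            if prod == 0:
                break
        total += prod
    return total
```

The brute-force permanent serves as a ground truth against which randomized estimates on small instances can be compared.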
Rasmussen’s RM algorithm, introduced for approximating the permanent of arbitrary 0‑1 matrices, proceeds by randomly ordering the rows, then for each row selecting uniformly at random among the columns that contain a 1 and have not yet been used, and finally multiplying together the number of available choices at each step. Earlier analyses claim that RM achieves a critical ratio (the ratio of the second moment to the square of the first moment) of $O(n\omega(n))$ for “almost all” 0‑1 matrices, suggesting that it is a practical, simple estimator for the #P‑complete permanent problem.
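A single run of the procedure just described can be sketched as below (`rm_estimate` is an illustrative name; the row shuffle follows the summary's description of the algorithm). The returned product is an unbiased estimate of the permanent, and averaging many independent runs approximates it.

```python
import random

def rm_estimate(A, rng=random):
    """One run of Rasmussen's estimator on a 0-1 matrix A:
    process rows in random order; at each row pick uniformly among
    the unused columns holding a 1, multiplying the choice counts.
    Returns 0 if some row has no available column."""
    n = len(A)
    rows = list(range(n))
    rng.shuffle(rows)
    used = set()
    w = 1
    for i in rows:
        choices = [j for j in range(n) if A[i][j] == 1 and j not in used]
        if not choices:
            return 0          # dead end: this run contributes 0
        w *= len(choices)
        used.add(rng.choice(choices))
    return w
```

For the all-ones $n\times n$ matrix every run returns exactly $n!$, the true permanent; for sparser matrices the estimate fluctuates, which is precisely what the critical ratio measures.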
The authors of the present study argue that the special block structure of $A'$ invalidates the assumptions underlying the $O(n\omega(n))$ bound. They model $A'$ as a concatenation of two statistically distinct sub‑matrices: the original adjacency block, which is typically sparse or moderately dense depending on the graph’s average degree $d$, and the identity block, which is maximally sparse with exactly one 1 per row. This heterogeneity leads to a highly non‑uniform distribution of “available columns” during the RM process.
A rigorous analysis is carried out in three stages. First, the expected number of selectable columns and its variance are derived for rows belonging to each block. For rows in the identity block the number of choices is deterministically 1, while for rows in the adjacency block it is a random variable with mean roughly $d$ and variance proportional to $d(1-d)$. Second, the authors formulate the RM procedure as a Markov chain whose transition probabilities depend on the order in which rows from the two blocks appear. If a row from the identity block is processed early, the chain quickly becomes constrained, leading to a small product of choices; if many adjacency‑block rows appear first, the product can become exponentially large. Third, they compute the second moment of the estimator and obtain the critical ratio $\rho = \frac{\mathbb{E}[X^2]}{(\mathbb{E}[X])^2}$, where $X$ denotes the RM estimate; with a certain probability this ratio grows by a factor larger than any polynomial in $n$, which is what makes the estimator unreliable on the transformed matrix.
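The critical ratio can also be probed empirically. The sketch below (hypothetical helper names, a toy matrix, and a plain Monte Carlo loop, not the paper's exact analysis) estimates $\rho = \mathbb{E}[X^2]/(\mathbb{E}[X])^2$ from repeated RM runs.

```python
import random

def rm_run(A, rng):
    # One Rasmussen run: random row order, uniform column choice per row.
    n = len(A)
    rows = list(range(n))
    rng.shuffle(rows)
    used, w = set(), 1
    for i in rows:
        choices = [j for j in range(n) if A[i][j] == 1 and j not in used]
        if not choices:
            return 0
        w *= len(choices)
        used.add(rng.choice(choices))
    return w

def critical_ratio(A, trials=5000, seed=0):
    """Monte Carlo estimate of rho = E[X^2] / (E[X])^2 for the RM
    estimator X on matrix A (an illustrative probe only)."""
    rng = random.Random(seed)
    xs = [rm_run(A, rng) for _ in range(trials)]
    mean = sum(xs) / trials
    mean_sq = sum(x * x for x in xs) / trials
    return mean_sq / (mean * mean)
```

By Jensen's inequality the empirical ratio is always at least 1; a ratio that stays bounded by a small polynomial in $n$ indicates a usable estimator, while rapid growth, as the paper proves for the transformed matrix, signals that the sample mean concentrates too slowly to be trusted.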