Faster Algorithms for Edge Connectivity via Random $2$-Out Contractions

Notice: This research summary and analysis were automatically generated using AI technology. For full accuracy, please refer to the original arXiv source.

We provide a simple new randomized contraction approach to the global minimum cut problem for simple undirected graphs. The contractions exploit 2-out edge sampling from each vertex rather than the standard uniform edge sampling. We demonstrate the power of our new approach by obtaining better algorithms for sequential, distributed, and parallel models of computation. Our end results include the following randomized algorithms for computing edge connectivity with high probability:

– Two sequential algorithms with complexities $O(m \log n)$ and $O(m+n \log^3 n)$. These improve on a long line of developments, including a celebrated $O(m \log^3 n)$ algorithm of Karger [STOC'96] and the state-of-the-art $O(m \log^2 n (\log\log n)^2)$ algorithm of Henzinger et al. [SODA'17]. Moreover, our $O(m+n \log^3 n)$ algorithm is optimal whenever $m = \Omega(n \log^3 n)$. Within our new time bounds, whp, we can also construct the cactus representation of all minimum cuts.
– An $\tilde{O}(n^{0.8} D^{0.2} + n^{0.9})$-round distributed algorithm, where $D$ denotes the graph diameter. This improves substantially on a recent breakthrough of Daga et al. [STOC'19], which achieved a round complexity of $\tilde{O}(n^{1-1/353}D^{1/353} + n^{1-1/706})$, hence providing the first sublinear distributed algorithm for exactly computing the edge connectivity.
– The first $O(1)$-round algorithm for the massively parallel computation setting with linear memory per machine.


💡 Research Summary

The paper introduces a new randomized contraction technique called “random 2‑out contraction” for computing the global minimum cut (edge connectivity) of simple undirected graphs. Instead of contracting a single uniformly random edge as in Karger’s classic algorithm, each vertex independently selects two of its incident edges uniformly at random (with replacement) and all selected edges are contracted simultaneously. This simple operation dramatically reduces the number of vertices while preserving any non‑trivial minimum cut with constant probability.
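To make the operation concrete, here is a minimal Python sketch of one 2‑out contraction round (my own illustration, not the paper's implementation): `adj` maps each vertex to its neighbor list, and the small `DSU` union‑find helper is an assumption of this sketch.

```python
import random

class DSU:
    """Union-find with path halving, used to merge contracted vertices."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

def two_out_contract(n, adj):
    """One round of random 2-out contraction: every vertex draws two of
    its incident edges uniformly at random (with replacement) and all
    drawn edges are contracted simultaneously.  Returns the union-find
    structure and the surviving edges of the contracted multigraph
    (self-loops dropped, parallel edges kept, endpoints renamed to
    union-find roots)."""
    dsu = DSU(n)
    for v in range(n):
        if adj[v]:                      # skip isolated vertices
            for _ in range(2):
                dsu.union(v, random.choice(adj[v]))
    contracted = []
    for v in range(n):
        for u in adj[v]:
            if v < u and dsu.find(v) != dsu.find(u):
                contracted.append((dsu.find(v), dsu.find(u)))
    return dsu, contracted
```

A single call typically collapses dense neighborhoods into super-vertices while edges crossing a non-trivial minimum cut survive with constant probability.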

The authors prove two fundamental properties of the 2‑out contraction. First, a single round shrinks the number of vertices to O(n/δ), where δ is the minimum degree; since δ ≥ λ, this is at most O(n/λ), where λ is the edge connectivity. Second, any minimum cut that is not a singleton is preserved with constant probability. By repeating the contraction O(log n) times and applying a majority‑vote scheme to the resulting cuts, the success probability can be amplified to 1 − O(1/n^c) for any constant c.
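The amplification rests on a standard calculation (not spelled out above): if a single contraction preserves a fixed non-singleton minimum cut with some constant probability $p$, then $t = (c/p) \ln n$ independent repetitions all fail to preserve it with probability

$$(1-p)^{t} \le e^{-pt} = e^{-c \ln n} = n^{-c},$$

so at least one repetition preserves the cut with probability $1 - n^{-c}$.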

The overall algorithm consists of four stages:

  1. Vertex reduction via random 2‑out contraction, yielding a graph with O(n/λ) vertices.
  2. Edge sparsification of the reduced graph to O(n) edges while still preserving all cuts of size up to O(λ). The paper presents both a deterministic approach based on sparse connectivity certificates and a randomized approach using uniform edge sampling.
  3. Exact min‑cut computation on the sparse graph using any fast existing min‑cut routine (e.g., Karger‑Stein, deterministic flow‑based methods).
  4. Amplification by repeating steps 1‑3 O(log n) times and taking the majority cut for each edge.
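The pipeline above can be sketched end to end on small graphs (a toy simplification of my own: stage 2's sparsification is omitted, the exact solver of stage 3 is replaced by plain Karger edge contraction, and stage 4 simply keeps the smallest cut seen instead of a majority vote):

```python
import random
from collections import defaultdict

def min_cut_2out(n, edges, trials=100):
    """Toy min-cut pipeline: each trial runs one random 2-out
    contraction and then finishes with uniform random edge contractions
    (Karger) on whatever survives.  Comparing against the minimum degree
    covers singleton cuts, which a 2-out contraction may destroy."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    best = min(len(adj[v]) for v in range(n))  # best singleton cut
    for _ in range(trials):
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        def union(a, b):
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[ra] = rb
                return True
            return False

        comps = n
        # Stage 1: random 2-out contraction (two draws per vertex).
        for v in range(n):
            for _ in range(2):
                if adj[v] and union(v, random.choice(adj[v])):
                    comps -= 1
        # Stage 3 stand-in: contract uniformly random surviving edges
        # until only two super-vertices remain.
        pool = list(edges)
        random.shuffle(pool)
        for u, v in pool:
            if comps == 2:
                break
            if union(u, v):
                comps -= 1
        if comps == 2:
            cut = sum(1 for u, v in edges if find(u) != find(v))
            best = min(best, cut)
    return best
```

On tiny inputs, enough trials recover the exact edge connectivity with high probability; the real algorithm achieves this with only O(log n) repetitions by combining the contraction with sparsification and an exact solver.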

The technique is instantiated in four computational models:

  • Sequential model – Using Union‑Find and appropriate data structures, the contraction and sparsification steps run in O(m) time, leading to two overall time bounds: O(m log n) and O(m + n log³ n). The latter is optimal when m = Ω(n log³ n) and improves upon the previous best O(m log² n (log log n)²) bound.

  • CONGEST distributed model – Each vertex locally selects its two edges and informs neighbors in O(1) rounds. The diameter of the 2‑out subgraph’s connected components is O(D + log n), enabling the whole algorithm to finish in Õ(n^{0.8} D^{0.2} + n^{0.9}) rounds, a substantial improvement over the prior Õ(n^{1‑1/353} D^{1/353} + n^{1‑1/706}) result. The algorithm computes the exact edge connectivity, not just an approximation.

  • Massively Parallel Computation (MPC) model – Assuming each machine has O(n) local memory, the 2‑out contraction can be performed locally on each machine, and the entire procedure completes in O(1) rounds. The global memory required is O(m + n log³ n), far less than earlier O(m n) approaches.

  • CREW PRAM model – The contraction and sparsification are parallelized to achieve O(log³ n) depth with total work O(m log n + n log⁴ n), improving on the previous O(m log⁴ n) work while matching the depth of the best known PRAM algorithms.
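The deterministic sparsification mentioned in step 2 can be illustrated with a classic forest-decomposition connectivity certificate in the spirit of Nagamochi–Ibaraki (a hedged sketch of my own, not the paper's exact routine): the union of k successive maximal spanning forests has at most k(n − 1) edges yet preserves every cut of value at most k.

```python
def sparse_certificate(n, edges, k):
    """k-connectivity certificate via forest decomposition: repeatedly
    peel off a maximal spanning forest of the remaining edges, k times.
    The kept edges number at most k*(n-1) and every cut of value <= k
    in the original graph keeps its value in the certificate."""
    remaining = list(edges)
    kept = []
    for _ in range(k):
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        forest, rest = [], []
        for u, v in remaining:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv      # edge joins two trees: keep it
                forest.append((u, v))
            else:
                rest.append((u, v))  # edge closes a cycle: defer it
        kept.extend(forest)
        remaining = rest
        if not remaining:
            break
    return kept
```

Running this with k = O(λ) on the contracted graph is one way to realize the O(n)-edge sparsification step; the paper also gives a randomized alternative via uniform edge sampling.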

Overall, the paper demonstrates that the simple random 2‑out contraction is a powerful primitive for graph compression: it simultaneously reduces vertices and edges while preserving all non‑trivial minimum cuts with high probability. This leads to faster, simpler, and more practical algorithms across a range of computational settings, and opens avenues for applying similar contraction ideas to other connectivity and sparsification problems.

