Fast Balanced Partitioning is Hard, Even on Grids and Trees


Two kinds of approximation algorithms exist for the k-BALANCED PARTITIONING problem: those that are fast but compute unsatisfying approximation ratios, and those that guarantee high quality ratios but are slow. In this paper we prove that this tradeoff between runtime and solution quality is necessary. In this problem a minimum number of edges must be found that, when cut, partition the vertices of a graph into k equal-sized sets. We develop a reduction framework that identifies necessary conditions on the considered graph class for proving the hardness of the problem. We focus on two combinatorially simple but very different classes, namely trees and solid grid graphs. The latter are finite connected subgraphs of the infinite 2D grid without holes. First we use the framework to show that for solid grid graphs it is NP-hard to approximate the optimum number of cut edges within any satisfying ratio. Then we consider solutions in which the sets may deviate from being equal-sized. Our framework is used on grids and trees to prove that no fully polynomial time algorithm exists that computes solutions in which the sets are arbitrarily close to equal-sized. This is true even if the number of edges cut is allowed to increase the more stringent the limit on the set sizes is. These are the first bicriteria inapproximability results for the problem.


💡 Research Summary

The paper investigates the fundamental trade‑off between running time, cut‑size approximation, and balance of part sizes in the k‑BALANCED PARTITIONING problem. In this problem a graph G=(V,E) with n vertices must be partitioned into k parts S₁,…,S_k such that each part contains at most d·n/k vertices (d≥1) and the number of edges crossing between different parts (the cut size) is minimized. This formulation captures many practical scenarios, for example load balancing in parallel finite‑element simulations, VLSI layout, image segmentation, and divide‑and‑conquer algorithms, where both communication cost (cut size) and load balance (part sizes) are critical.
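The two objectives can be made concrete in a few lines of code. The sketch below is not from the paper; the toy graph and all names are illustrative. It counts the cut edges of a candidate partition and checks the size constraint on the parts:

```python
def cut_size(edges, part_of):
    """Number of edges whose endpoints lie in different parts."""
    return sum(1 for u, v in edges if part_of[u] != part_of[v])

def is_balanced(part_of, k, n, d=1.0):
    """Each of the k parts may contain at most d*n/k vertices."""
    sizes = {}
    for p in part_of.values():
        sizes[p] = sizes.get(p, 0) + 1
    return len(sizes) <= k and all(s <= d * n / k for s in sizes.values())

# Toy instance: a 2x3 solid grid (vertices 0..5, two rows of three),
# partitioned into k = 3 perfectly balanced column pairs.
edges = [(0, 1), (1, 2), (3, 4), (4, 5), (0, 3), (1, 4), (2, 5)]
part_of = {0: 0, 3: 0, 1: 1, 4: 1, 2: 2, 5: 2}
print(cut_size(edges, part_of))        # 4 edges cross between parts
print(is_balanced(part_of, k=3, n=6))  # True: each part has n/k = 2 vertices
```

The tension the paper studies is exactly between these two quantities: minimizing the first while keeping the second feasible.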

State of the art
Two families of approximation algorithms are known. The first family runs in near‑linear time but achieves only weak approximation ratios (often O(log n) or O(log k)). The second family guarantees a constant‑factor approximation of the cut size but requires super‑quadratic running time (e.g., O(n³) for planar graphs). For planar meshes arising from 2‑D finite‑element methods the graph is connected and has bounded degree, so the strong hardness results for general (possibly disconnected) graphs do not directly apply. Moreover, for trees with bounded degree there exist O(log (n/k))‑approximation algorithms, suggesting that the difficulty might be limited to “hard” graph topologies.

Main contribution
The authors prove that the observed trade‑off is not an artifact of current techniques but a genuine complexity barrier. They develop a generic reduction from the strongly NP‑hard 3‑PARTITION problem to k‑BALANCED PARTITIONING. For a given 3‑PARTITION instance (integers a₁,…,a_{3k} and target sum s with s/4 < a_i < s/2 and Σa_i = k s), they construct a graph consisting of 3k “gadgets”. Gadget i contains p·a_i vertices; the value of p depends on the target graph class (e.g., p = 2 for general graphs, p proportional to √a_i for solid grids, p = 2 for bounded‑degree trees). The gadgets are linked by m edges (m may be zero). The construction satisfies two crucial properties:

  1. Cut‑size bound – Any partition that cuts at most α·m edges leaves only a small fraction of “minority” vertices (vertices whose colour differs from the majority colour of their gadget). This forces each gadget to be essentially monochromatic when the cut budget is small.
  2. Balance bound – If a partition respects the size constraint (each part ≤ (1+ε)·d·n/k), then each gadget belongs entirely to a single part, and the colours of the gadgets define a partition of the original integers into k groups.

If an algorithm A can, for any such constructed graph, output a partition that respects the size bound and whose cut size is within factor α of the optimum, then A can be used to decide whether the original 3‑PARTITION instance has a solution: a feasible 3‑PARTITION corresponds to a perfectly balanced cut of size m, while the absence of a solution forces any near‑balanced cut to exceed α·m edges, contradicting the algorithm’s guarantee. Consequently, unless P = NP, no fully polynomial‑time algorithm (running in time polynomial in n and 1/ε) can achieve simultaneously:
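The decision problem the reduction targets can be illustrated with a brute-force checker (a toy sketch, not part of the paper's construction): a 3‑PARTITION instance is feasible exactly when its integers split into k disjoint triples, each summing to s.

```python
from itertools import combinations

def three_partition(a, k, s):
    """Brute force: can the multiset a (of size 3k) be split into
    k disjoint triples, each summing to s?"""
    if not a:
        return k == 0
    a = sorted(a)
    first, rest = a[0], a[1:]
    # The smallest element must join some pair that completes a triple of sum s.
    for i, j in combinations(range(len(rest)), 2):
        if first + rest[i] + rest[j] == s:
            remaining = [x for t, x in enumerate(rest) if t not in (i, j)]
            if three_partition(remaining, k - 1, s):
                return True
    return False

# s/4 < a_i < s/2 holds for every element, as 3-PARTITION requires.
print(three_partition([23, 31, 36, 25, 30, 35], k=2, s=90))  # True
print(three_partition([23, 31, 37, 25, 30, 35], k=2, s=90))  # False
```

Since 3‑PARTITION is strongly NP‑hard, no checker like this runs in polynomial time; the point of the reduction is that a fast bicriteria partitioning algorithm with the stated guarantees would yield one.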

  • a cut‑size approximation α = n^{c}/ε^{d} with a constant c < ½ for solid grid graphs,
  • a cut‑size approximation α = n^{c}/ε^{d} with a constant c < 1 for trees,
  • any finite α for general (possibly disconnected) graphs when near‑balanced partitions are required.

Hardness for solid grid graphs
Solid grid graphs are finite, hole‑free subgraphs of the infinite 2‑D integer lattice. Their isoperimetric properties imply that separating a set of Θ(a_i) vertices requires Ω(√a_i) edges. By choosing each gadget as a rectangular block of size proportional to √a_i × √a_i and linking the blocks with a linear number of edges, the authors obtain p ≈ √a_i and ε ≈ (2 k s)^{-1}. The reduction shows that for any constant c < ½, achieving α = n^{c}/ε^{d} in polynomial time would solve 3‑PARTITION, which is impossible unless P = NP. This improves earlier results that only ruled out α = n^{c} for c < 1.
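The isoperimetric property can be checked exhaustively on tiny grids. The sketch below (illustrative, not the paper's proof) computes the minimum edge boundary over all b‑vertex subsets of a small solid grid and compares it with a √b lower bound, which holds for b ≤ n/2:

```python
from itertools import combinations
from math import isqrt

def grid_edges(w, h):
    """Edges of the w x h solid grid; vertices are lattice points (x, y)."""
    E = []
    for x in range(w):
        for y in range(h):
            if x + 1 < w:
                E.append(((x, y), (x + 1, y)))
            if y + 1 < h:
                E.append(((x, y), (x, y + 1)))
    return E

def min_boundary(w, h, b):
    """Minimum number of boundary edges over all b-vertex subsets (brute force)."""
    V = [(x, y) for x in range(w) for y in range(h)]
    E = grid_edges(w, h)
    return min(sum((u in S) != (v in S) for u, v in E)
               for S in (set(c) for c in combinations(V, b)))

# On the 3x3 grid (n = 9): separating b <= n/2 vertices cuts at least ~sqrt(b) edges.
for b in range(1, 5):
    assert min_boundary(3, 3, b) >= isqrt(b)
print(min_boundary(3, 3, 4))  # 4: the cheapest 4-vertex cut is a 2x2 corner block
```

This linear-in-√a_i boundary cost is what makes a √a_i × √a_i block an expensive gadget to split, in contrast to general graphs where a gadget can be separated cheaply.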

Hardness for trees
For trees the gadgets are stars whose centre has degree a_i. Separating leaves from their centre costs one cut edge per leaf, so displacing a constant fraction of a gadget's vertices requires Ω(a_i) cut edges, and the cut‑size budget again forces each gadget to stay intact. With p = 2 and the same ε as above, the reduction yields that any algorithm achieving α = n^{c}/ε^{d} with c < 1 cannot run in fully polynomial time unless P = NP. This matches known APX‑hardness for unrestricted trees but extends it to the bicriteria setting.
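The role of the star gadgets can be seen directly: every leaf recoloured away from its centre's part cuts exactly one edge, so the cut cost grows linearly in the number of displaced vertices. A minimal sketch (illustrative names, not the paper's construction):

```python
def star_edges(a_i):
    """Star gadget: centre 0 joined to leaves 1..a_i."""
    return [(0, leaf) for leaf in range(1, a_i + 1)]

def cut_size(edges, part_of):
    """Number of edges whose endpoints lie in different parts."""
    return sum(1 for u, v in edges if part_of[u] != part_of[v])

# Recolour j = 3 leaves of a 7-leaf star: exactly j edges are cut.
a_i, j = 7, 3
part_of = {0: 0, **{leaf: (1 if leaf <= j else 0) for leaf in range(1, a_i + 1)}}
print(cut_size(star_edges(a_i), part_of))  # 3
```

A small cut budget therefore leaves each star monochromatic, and the star colours encode a grouping of the integers a_i, just as in the grid case.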

Implications and tightness
The paper also provides matching approximation algorithms that achieve the proven bounds up to constant factors, showing that the hardness results are asymptotically tight. For solid grids, an O(log k)‑approximation with part sizes at most (1+ε)·d·n/k can be obtained in O(n^{1.5}) time, matching the lower bound on α up to the exponent on n. For trees, an O(log (n/k))‑approximation exists for bounded‑degree trees, again aligning with the hardness threshold.

Practical relevance
The findings explain why popular partitioning tools such as METIS or Scotch, which aim for near‑perfect balance while offering no theoretical cut‑size guarantees, cannot be replaced by a provably fast algorithm with comparable balance guarantees. Any attempt to simultaneously reduce ε (the allowed imbalance) and keep α small inevitably leads to super‑polynomial running time or to a cut size that grows dramatically.

Conclusion
The authors establish that the trade‑off between runtime, cut‑size approximation, and balance is inherent to k‑BALANCED PARTITIONING, even on graph families that are structurally simple and highly regular, such as solid grids and trees. By leveraging a versatile reduction from 3‑PARTITION, they prove bicriteria inapproximability results that are stronger than previously known and that hold for the first time on these two disparate classes. The work closes a gap in the literature, showing that fast algorithms cannot achieve both high‑quality cuts and near‑perfect balance unless a major breakthrough in complexity theory occurs.

