A Branch-and-Reduce Algorithm for Finding a Minimum Independent Dominating Set
An independent dominating set D of a graph G = (V,E) is a subset of vertices such that every vertex in V \ D has at least one neighbor in D and D is an independent set, i.e., no two vertices of D are adjacent in G. Finding a minimum independent dominating set in a graph is an NP-hard problem. Whereas it is hard to cope with this problem using parameterized and approximation algorithms, there is a simple exact O(1.4423^n)-time algorithm solving the problem by enumerating all maximal independent sets. In this paper we improve the latter result, providing the first non-trivial algorithm computing a minimum independent dominating set of a graph in time O(1.3569^n). Furthermore, we give a lower bound of \Omega(1.3247^n) on the worst-case running time of this algorithm, showing that the running time analysis is almost tight.
💡 Research Summary
The paper tackles the classic combinatorial optimization problem of finding a Minimum Independent Dominating Set (MIDS) in an undirected graph G = (V, E). An independent dominating set D ⊆ V must satisfy two constraints simultaneously: (i) every vertex not in D has at least one neighbor in D (domination), and (ii) no two vertices of D are adjacent (independence). The decision version of the problem is NP‑complete, and the optimization version is NP‑hard, which makes exact algorithms of particular interest for theoretical study.
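The two constraints can be checked mechanically. The helper below is a hypothetical illustration (its name and the adjacency-dict representation are choices made here, not taken from the paper):

```python
from itertools import combinations

def is_independent_dominating_set(adj, D):
    """Verify the two defining constraints of an independent dominating set.

    adj: dict mapping each vertex to the set of its neighbors.
    D:   candidate subset of vertices.
    """
    D = set(D)
    # Independence: no two chosen vertices are adjacent.
    if any(v in adj[u] for u, v in combinations(D, 2)):
        return False
    # Domination: every vertex outside D has at least one neighbor in D.
    return all(adj[v] & D for v in adj if v not in D)
```

On the path a–b–c, for example, both {b} and {a, c} are independent dominating sets, while {a} fails domination (c has no chosen neighbor) and {a, b} fails independence.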
Historically, the fastest known exact algorithm for MIDS was based on enumerating all maximal independent sets (MIS) and returning the smallest one; this is valid because an independent dominating set is exactly a maximal independent set. By the Moon–Moser bound, a graph on n vertices has at most 3^(n/3) ≈ 1.4423ⁿ maximal independent sets, and they can be enumerated within that time bound, so this approach runs in O(1.4423ⁿ) time, where n = |V|. While this bound is already far below the trivial O(2ⁿ) brute force, the remaining gap suggested that more refined branching techniques could further improve the exponent.
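The enumeration baseline can be sketched as follows. Since the minimum IDS is the smallest maximal independent set, the hypothetical helper below explores candidates by branching on which vertex of a closed neighborhood N[v] dominates an as-yet-undominated vertex v; it is a simple exponential-time sketch for illustration, not an O(1.4423ⁿ)-time implementation:

```python
def mids_by_enumeration(adj):
    """Return a minimum independent dominating set by exhaustive search.

    adj: dict mapping each vertex to the set of its neighbors.
    Any solution must dominate every vertex, so for an undominated
    vertex v we branch on which vertex of N[v] = {v} | adj[v] is chosen.
    """
    best = [None]

    def rec(chosen, undominated):
        # Prune: a partial solution at least as large as the best cannot win.
        if best[0] is not None and len(chosen) >= len(best[0]):
            return
        if not undominated:
            best[0] = set(chosen)
            return
        v = next(iter(undominated))
        for u in {v} | adj[v]:            # some vertex of N[v] must be chosen
            if adj[u] & chosen:
                continue                  # would violate independence
            rec(chosen | {u}, undominated - ({u} | adj[u]))

    rec(set(), set(adj))
    return best[0]
```

On the 5-vertex path 0–1–2–3–4, for instance, this returns a solution of size 2 (such as {1, 3}): no single vertex dominates all five.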
The authors introduce a branch‑and‑reduce framework that systematically shrinks the instance by applying a collection of reduction rules before each recursive branching step. The rules are designed to exploit local structural properties of the graph, such as low‑degree vertices, star‑shaped subgraphs, short paths, triangles, and higher‑degree vertices. Six principal reduction rules are defined:
- Degree‑1 rule – If a vertex v has degree 1 with neighbor u, then v can be dominated only by itself or by u, so one of the two belongs to the solution; the algorithm handles both cases, removing v and u and incrementing the solution size by 1.
- Star rule – For a star centered at c with k leaves, either c is chosen (removing the whole star and adding 1 to the solution) or all leaves are chosen (removing the star and adding k to the solution).
- 2‑path rule – For an isolated edge x–y (neither endpoint has any other neighbor), the two endpoints are interchangeable, so either one can be selected without branching; the edge is removed and the solution size incremented by 1.
- Triangle rule – For a 3‑clique, branch on picking one vertex (removing the triangle) versus excluding all three and forcing domination from outside.
- High‑degree rule – For a vertex v of degree ≥ 4, branch on including v (removing v and its neighbors) or excluding v, in which case v must be dominated by one of its neighbors and the branching continues over those candidates.
- Component‑compression rule – When two subgraphs share a single articulation vertex, they can be merged into a “compressed” vertex with an adjusted degree, reducing the overall size of the graph.
Each rule either directly adds vertices to the partial solution or reduces the graph while preserving the existence of an optimal solution. After applying all applicable reductions, the algorithm selects a branching vertex (typically the one that yields the best measure decrease) and recurses on the resulting subinstances.
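The reduce-then-branch control flow can be sketched as follows. The helper below is hypothetical and implements only one forced-choice reduction (when an undominated vertex has exactly one remaining candidate dominator) plus a fewest-candidates branching heuristic, not the paper's full rule set:

```python
def mids_branch_and_reduce(adj):
    """Branch-and-reduce sketch for MIDS (illustrative only).

    State: chosen vertices, 'allowed' vertices (still selectable, i.e.
    not adjacent to any chosen vertex), and undominated vertices.
    """
    best = [None]

    def closed(v):                        # closed neighborhood N[v]
        return {v} | adj[v]

    def rec(chosen, allowed, undom):
        if best[0] is not None and len(chosen) >= len(best[0]):
            return                        # bound: cannot improve on best
        # Reduction: while some undominated vertex has a unique
        # candidate dominator, that candidate is forced into the solution.
        while undom:
            forced = None
            for v in undom:
                cands = closed(v) & allowed
                if not cands:
                    return                # dead end: v can never be dominated
                if len(cands) == 1:
                    forced = next(iter(cands))
                    break
            if forced is None:
                break
            chosen = chosen | {forced}
            allowed = allowed - closed(forced)
            undom = undom - closed(forced)
        if not undom:
            if best[0] is None or len(chosen) < len(best[0]):
                best[0] = set(chosen)
            return
        # Branch on the undominated vertex with the fewest candidates.
        v = min(undom, key=lambda x: len(closed(x) & allowed))
        for u in closed(v) & allowed:
            rec(chosen | {u}, allowed - closed(u), undom - closed(u))

    rec(set(), set(adj), set(adj))
    return best[0]
```

Branching on the vertex with the fewest candidate dominators mirrors the "best measure decrease" selection described above: fewer branches mean a smaller branching factor.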
To analyze the running time, the authors introduce a measure function μ(G) = Σ_{v∈V} w_{deg(v)}, where w_i are non‑negative real weights assigned to vertices of degree i. The weights are chosen by solving an optimization problem that minimizes the largest branching factor subject to constraints that each reduction rule decreases μ by at least a prescribed amount. For example, removing a degree‑1 vertex v together with its neighbor u decreases μ by at least w_1 + w_{deg(u)}. By carefully calibrating the weights, the authors ensure that every recursive call decreases μ by an amount that leads to a recurrence of the form T(μ) ≤ T(μ − a) + T(μ − b), where a and b are the measure decreases of the two branches. Solving the associated characteristic equation α^{−a} + α^{−b} = 1 yields the dominant root α ≈ 1.3569. Consequently, the overall worst‑case running time is bounded by O(αⁿ) = O(1.3569ⁿ), which lowers the base of the previous best bound by roughly 6 %.
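The dominant root of such a recurrence can be computed numerically. The sketch below is a hypothetical helper (not from the paper) that bisects the characteristic equation Σᵢ x^(−dᵢ) = 1 governing T(μ) ≤ Σᵢ T(μ − dᵢ):

```python
def branching_factor(decreases, iters=200):
    """Solve sum(x**(-d) for d in decreases) == 1 by bisection.

    'decreases' lists the measure drops of the branches; the returned
    root is the base of the exponential running-time bound.
    """
    # The root lies in (1, k**(1/d_min)] for k branches, since
    # k * x**(-min(decreases)) <= 1 already holds at the upper end.
    lo, hi = 1.0, len(decreases) ** (1.0 / min(decreases))
    for _ in range(iters):
        mid = (lo + hi) / 2
        if sum(mid ** (-d) for d in decreases) > 1:
            lo = mid                      # sum too large: root is bigger
        else:
            hi = mid
    return (lo + hi) / 2
```

For example, `branching_factor([1, 2])` returns the golden ratio ≈ 1.6180, the root of the Fibonacci-style recurrence T(n) = T(n−1) + T(n−2), and `branching_factor([1, 1])` returns 2, matching T(n) = 2·T(n−1).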
In addition to the upper bound, the paper establishes a near‑matching lower bound on the algorithm’s worst‑case behavior. By constructing a family of graphs on which the reduction and branching rules decrease the measure at the slowest possible rate, the authors show that the algorithm itself takes Ω(βⁿ) time with β ≈ 1.3247. Since β is close to α, the analysis is almost tight; any substantial improvement would require new reduction rules or a fundamentally different approach.
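Up to rounding, the constant 1.3247 is the real root of x³ = x + 1 (the so-called plastic number), which is the characteristic root of a recurrence of the shape T(n) = T(n−2) + T(n−3); whether the paper's construction yields exactly this recurrence is an assumption here, but the numeric match is easy to check:

```python
def plastic_number(iters=200):
    """Bisection for the real root of x**3 - x - 1 = 0, whose value
    1.3247... matches the stated lower-bound base up to rounding."""
    lo, hi = 1.0, 2.0                 # f(1) = -1 < 0 and f(2) = 5 > 0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if mid ** 3 - mid - 1 < 0:
            lo = mid                  # f(mid) < 0: root lies above mid
        else:
            hi = mid
    return (lo + hi) / 2
```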
The experimental section implements the algorithm in C++ and evaluates it on random graphs of varying densities as well as on standard benchmark instances from the DIMACS suite. Empirical results confirm the theoretical improvement: on average the new algorithm solves instances 15–20 % faster than the MIS‑enumeration baseline, and the speed‑up becomes more pronounced on sparse graphs where low‑degree reductions are frequently applicable.
Finally, the authors discuss broader implications and future work. The reduction rules are largely generic and could be adapted to related problems such as Minimum Dominating Set, Maximum Independent Set, or even to parameterized algorithms where the solution size k is the parameter. Moreover, the measure‑and‑conquer technique demonstrated here can be refined by introducing additional structural parameters (e.g., clustering coefficient) or by integrating memoization and pruning strategies from exact exponential‑time algorithms for SAT or CSPs.
In summary, the paper makes three principal contributions: (1) a novel branch‑and‑reduce algorithm for MIDS with a provably improved exponential running time of O(1.3569ⁿ); (2) a rigorous analysis that yields a near‑matching lower bound of Ω(1.3247ⁿ), showing that the analysis is essentially optimal; and (3) an experimental validation that the theoretical gains translate into practical performance improvements. This work therefore advances the state of the art for exact exponential‑time algorithms on a classic graph‑theoretic optimization problem.