A simple algorithm for random colouring G(n, d/n) using (2+epsilon)d colours
Approximate random k-colouring of a graph G=(V,E) is a very well studied problem in computer science and statistical physics. It amounts to constructing a k-colouring of G which is distributed close to the Gibbs distribution, i.e. the uniform distribution over all the k-colourings of G. Here, we deal with the problem when the underlying graph is an instance of the Erdős–Rényi random graph G(n,p), where p=d/n and d is fixed. We propose a novel efficient algorithm for approximate random k-colouring with the following properties: given an instance of G(n,d/n) and any k>(2+ε)d, it returns a k-colouring distributed within total variation distance n^{-Ω(1)} of the Gibbs distribution, with probability 1-n^{-Ω(1)}. What we propose is neither an MCMC algorithm nor one inspired by the message-passing heuristics introduced by statistical physicists. Our algorithm is combinatorial in nature: it is based on a rather simple recursion which reduces random k-colouring of G(n,d/n) to random k-colouring of simpler subgraphs first. The lower bound on the number of colours required for our algorithm to run in polynomial time is dramatically smaller than the corresponding bounds for any previous algorithm.
💡 Research Summary
The paper tackles the problem of generating an approximately uniform random k‑colouring of an Erdős–Rényi random graph G(n, d/n) when the number of colours k exceeds (2 + ε)·d for any fixed ε > 0. Unlike the dominant approaches based on Markov‑chain Monte‑Carlo (MCMC) such as Glauber dynamics, or on statistical‑physics inspired message‑passing (Belief Propagation), the authors present a purely combinatorial, recursive algorithm that works directly on the graph structure.
The central idea is to reduce the colouring of the original graph to the colouring of a sequence of progressively simpler subgraphs obtained by deleting edges that belong to long cycles (length at least (log n)/(9 log d)). The recursion stops when the remaining subgraph G₀ is simple enough (essentially a forest) that a known polynomial‑time algorithm can produce a truly uniform random k‑colouring. The algorithm then reconstructs a colouring of the original graph by repeatedly applying a sub‑routine called STEP to re‑insert the deleted edges.
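The deletion phase can be sketched as follows. This is a simplified, stdlib-only illustration: the function names are ours, the paper's sequence G_r, …, G₀ is collapsed into a list of removed edges, and the threshold is passed in as a plain integer (assumed ≥ 3).

```python
def find_long_cycle(adj, threshold):
    """Return the edges of some cycle of length >= threshold, or None.
    adj maps each vertex to a set of neighbours. Iterative DFS: in an
    undirected graph every non-tree edge is a back edge to an ancestor,
    and it closes a cycle of length depth(v) - depth(w) + 1."""
    for root in adj:
        depth, parent = {root: 0}, {root: None}
        stack = [(root, iter(adj[root]))]
        while stack:
            v, it = stack[-1]
            pushed = False
            for w in it:
                if w == parent[v]:
                    continue
                if w in depth:                      # back edge: closes a cycle
                    if depth[v] - depth[w] + 1 >= threshold:
                        cycle, x = [], v
                        while x != w:               # walk tree edges up to w
                            cycle.append((x, parent[x]))
                            x = parent[x]
                        cycle.append((w, v))
                        return cycle
                    continue
                depth[w] = depth[v] + 1
                parent[w] = v
                stack.append((w, iter(adj[w])))
                pushed = True
                break
            if not pushed:
                stack.pop()
    return None

def peel_long_cycles(adj, threshold):
    """Repeatedly delete one edge from a cycle of length >= threshold.
    The deleted edge's endpoints stay connected through the rest of the
    cycle, so they remain at distance >= threshold - 1 -- which is what
    the spatial-mixing argument needs when the edge is reinserted."""
    removed = []
    while (cycle := find_long_cycle(adj, threshold)) is not None:
        u, v = cycle[0]
        adj[u].remove(v)
        adj[v].remove(u)
        removed.append((u, v))
    return adj, removed
```

On a 10-cycle with threshold 5, a single deletion leaves an acyclic path, which is exactly the kind of "simple" base graph the recursion bottoms out on.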
STEP receives a random k‑colouring of a graph G and two non‑adjacent vertices u, v (the endpoints of a previously deleted edge). If the colouring already assigns different colours to u and v, it is left unchanged. Otherwise, STEP selects a colour q uniformly at random from the colours different from that of u, builds the “disagreement graph” Q, consisting of the vertices coloured either q or the colour of u that are reachable from u through vertices of those two colours, and performs a q‑switching: the colours on the two sides of Q are swapped. This operation transforms a “bad” colouring (u and v share a colour) into a “good” one (they differ) with high probability, provided k ≥ (2 + ε)d. The key technical ingredient is a spatial‑mixing bound: for such k, the conditional distribution of the colour of v, given that u has colour q, approaches uniformity exponentially fast in the graph distance between u and v. Consequently, the distribution of the output of STEP is within total variation distance O(exp(−b·dist(u,v))) of the ideal uniform distribution over colourings in which u and v differ.
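A minimal sketch of STEP under these assumptions (our rendering, not the paper's verbatim pseudocode; colours are integers 0…k−1 and the graph is a dict of adjacency sets):

```python
import random
from collections import deque

def step(adj, colouring, u, v, k, rng=random):
    """Sketch of the STEP subroutine. u, v are the endpoints of a
    previously deleted edge, non-adjacent in the current graph."""
    if colouring[u] != colouring[v]:
        return colouring                  # already a "good" colouring
    c = colouring[u]
    q = rng.choice([x for x in range(k) if x != c])
    # Disagreement graph Q: vertices coloured c or q that are reachable
    # from u through vertices carrying only those two colours.
    component, queue = {u}, deque([u])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in component and colouring[y] in (c, q):
                component.add(y)
                queue.append(y)
    # q-switching: swap c and q on Q. Every edge leaving Q has its outer
    # endpoint coloured outside {c, q}, so properness is preserved; the
    # repair fails only if v itself lies in Q, which the paper shows is
    # unlikely when k >= (2 + eps)d.
    for x in component:
        colouring[x] = q if colouring[x] == c else c
    return colouring
```

Note that the switching is a purely local operation: only the bichromatic component around u is touched, which is why its cost is tied to the size of the disagreement graph rather than to n.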
To formalise the approximation quality, the authors introduce the notion of α‑isomorphism between two sets of colourings: two sets are α‑isomorphic if subsets comprising at least a (1 − α) fraction of each can be put in exact bijection. An α‑function implements this bijection on the large subsets while behaving arbitrarily elsewhere. Lemma 1 shows that if the set of bad colourings and the set of good colourings are α‑isomorphic via STEP, then the total variation distance between the distribution produced by STEP and the uniform distribution over good colourings is at most α. By analysing the structure of disagreement graphs in typical G(n, d/n) instances, the authors prove that α = n^{−Ω(1)} for the chosen k, i.e., the error is negligible.
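In symbols, the guarantee can be stated roughly as follows (our notation, not the paper's verbatim statement): if STEP restricts to a bijection between subsets $\mathcal{B}' \subseteq \mathcal{B}$ of the bad colourings and $\mathcal{G}' \subseteq \mathcal{G}$ of the good colourings, with $|\mathcal{B}'| \ge (1-\alpha)|\mathcal{B}|$ and $|\mathcal{G}'| \ge (1-\alpha)|\mathcal{G}|$, then

```latex
\[
  \bigl\| \operatorname{STEP}(\mu_{\mathcal{B}}) \;-\; \mu_{\mathcal{G}} \bigr\|_{\mathrm{tv}} \;\le\; \alpha ,
\]
```

where $\mu_{\mathcal{B}}$ and $\mu_{\mathcal{G}}$ denote the uniform distributions over the bad and the good colourings, respectively.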
The full algorithm proceeds as follows:
- Starting from G = G_r, repeatedly delete an edge from any cycle of length at least (log n)/(9 log d) to obtain the sequence G_{r‑1}, …, G₀.
- Colour G₀ uniformly at random using any standard polynomial‑time method (possible because G₀ is a forest‑like graph).
- For i = 0 to r‑1, apply STEP to the current colouring of G_i to obtain a colouring of G_{i+1}. STEP can fail (the q‑switching may still yield a bad colouring), but the failure probability is bounded by n^{−Ω(1)} and is absorbed into the overall error analysis.
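The three steps above can be combined into a toy end-to-end sampler. This is our own simplification: for brevity it deletes *every* non-tree edge of a BFS forest (the paper deletes only edges on long cycles and colours a slightly richer base graph exactly), and the repair inside `reinsert` is the q-switching in condensed form.

```python
import random
from collections import deque

def uniform_forest_colouring(tree, k, rng):
    """Exact uniform k-colouring of a forest: colour each root uniformly,
    then each child uniformly among the k - 1 colours differing from its
    parent's. This is the easy base case of the recursion."""
    col = {}
    for root in tree:
        if root in col:
            continue
        col[root] = rng.randrange(k)
        queue = deque([(root, None)])
        while queue:
            v, parent = queue.popleft()
            for w in tree[v]:
                if w != parent:
                    col[w] = rng.choice([x for x in range(k) if x != col[v]])
                    queue.append((w, v))
    return col

def reinsert(cur, col, u, v, k, rng):
    """Repair col via a q-switching if needed, then add edge {u, v}."""
    if col[u] == col[v]:
        c = col[u]
        q = rng.choice([x for x in range(k) if x != c])
        comp, queue = {u}, deque([u])
        while queue:                      # u's {c, q}-coloured component
            x = queue.popleft()
            for y in cur[x]:
                if y not in comp and col[y] in (c, q):
                    comp.add(y)
                    queue.append(y)
        for x in comp:                    # the q-switching
            col[x] = q if col[x] == c else c
    cur[u].add(v)
    cur[v].add(u)

def sample_colouring(adj, k, rng):
    """Delete non-tree edges, colour the forest exactly, reinsert."""
    seen, tree = set(), {v: set() for v in adj}
    for root in adj:                      # BFS spanning forest
        if root in seen:
            continue
        seen.add(root)
        queue = deque([root])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    tree[v].add(w)
                    tree[w].add(v)
                    queue.append(w)
    removed = [(v, w) for v in adj for w in adj[v]
               if v < w and w not in tree[v]]
    col = uniform_forest_colouring(tree, k, rng)
    for u, v in reversed(removed):        # rebuild G edge by edge
        reinsert(tree, col, u, v, k, rng)
    return col
```

Because a q-switching preserves the properness of all existing edges, the only place the sketch can go wrong is a reinserted edge whose repair fails; the paper's analysis bounds exactly that event.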
The main results are:
- Theorem 1: For any fixed ε > 0 and k ≥ (2 + ε)d, with probability at least 1 − n^{−ε·90·log d} over the choice of the random graph, the total variation distance between the algorithm’s output distribution μ′ and the true uniform distribution μ over k‑colourings satisfies ‖μ − μ′‖ = O(n^{−ε·90·log d}).
- Theorem 2: Under the same conditions, the algorithm runs in O(n²) time with probability at least 1 − n^{−2/3}.
These theorems demonstrate that the algorithm achieves a near‑perfect approximation to the Gibbs distribution while using only (2 + ε)d colours, a bound dramatically lower than previous polynomial‑time algorithms that required Θ(d log d) or higher. The approach bypasses the “maximum‑degree obstacle” that hampers many mixing‑time analyses, because it never relies on the maximum degree (which in G(n, d/n) grows like Θ(log n/ log log n)). Instead, the algorithm exploits the typical sparsity and expansion properties of random graphs to guarantee rapid spatial mixing.
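The maximum-degree point is easy to see numerically; a quick check (our own illustration, not from the paper) of how log n / log log n grows while the expected degree d stays fixed:

```python
import math

# In G(n, d/n) the average degree stays d as n grows, but the maximum
# degree grows like Theta(log n / log log n), so any colour bound phrased
# in terms of the maximum degree is forced to grow with n.
def max_degree_order(n: int) -> float:
    return math.log(n) / math.log(math.log(n))

for n in (10**3, 10**6, 10**9):
    print(f"n = {n:>10}: log n / log log n ~ {max_degree_order(n):.2f}")
```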
The paper’s contributions are threefold:
- Improved colour bound – reducing the required number of colours to just above twice the expected degree d, dramatically below the bounds required by earlier polynomial‑time samplers.
- Novel combinatorial framework – introducing a recursion based on edge deletions and a local recolouring operation (q‑switching) that replaces Markov‑chain mixing arguments.
- α‑isomorphism methodology – Providing a new analytical tool to quantify how closely a deterministic transformation approximates a uniform distribution, which may be useful for other constraint‑satisfaction problems.
Overall, the work opens a new direction for designing efficient sampling algorithms for sparse random structures, showing that careful combinatorial reductions combined with precise spatial‑mixing estimates can replace heavy probabilistic machinery while achieving comparable or better performance.