On the Use of Randomness in Local Distributed Graph Algorithms

Notice: This research summary and analysis were automatically generated using AI technology. For full accuracy, please refer to the original arXiv source.

We attempt to better understand randomization in local distributed graph algorithms by exploring how randomness is used and what we can gain from it:

- We first ask how much randomness is needed to obtain efficient randomized algorithms. We show that for all locally checkable problems for which polylog $n$-time randomized algorithms exist, there are such algorithms even if either (I) there is only a single (private) independent random bit in each polylog $n$-neighborhood of the graph, (II) the (private) bits of randomness of different nodes are only polylog $n$-wise independent, or (III) there are only polylog $n$ bits of global shared randomness (and no private randomness).
- Second, we study how much we can improve the error probability of randomized algorithms. For all locally checkable problems for which polylog $n$-time randomized algorithms exist, we show that there are such algorithms that succeed with probability $1-n^{-2^{\varepsilon(\log\log n)^2}}$ and, more generally, $T$-round algorithms, for $T\geq$ polylog $n$, that succeed with probability $1-n^{-2^{\varepsilon\log^2 T}}$. We also show that polylog $n$-time randomized algorithms with success probability $1-2^{-2^{\log^\varepsilon n}}$ for some $\varepsilon>0$ can be derandomized to polylog $n$-time deterministic algorithms.

Both of the directions mentioned above, reducing the amount of randomness and improving the success probability, can be seen as partial derandomizations of existing randomized algorithms. In all the above cases, we also show that any significant improvement of our results would lead to a major breakthrough, as it would imply significantly more efficient deterministic distributed algorithms for a wide class of problems.


💡 Research Summary

The paper investigates two fundamental aspects of randomness in local distributed graph algorithms: how much randomness is actually required for efficient algorithms, and how far the success probability can be amplified. The authors focus on locally checkable labeling (LCL) problems that admit polylogarithmic‑time randomized algorithms in the LOCAL model, and they study how the amount and quality of the available randomness relates to the gap between randomized (P‑RLOCAL) and deterministic (P‑LOCAL) complexity.

First, they prove that for any LCL problem solvable in polylog n rounds with randomness, the same time bound can be achieved under dramatically weaker randomness assumptions. Three distinct models are considered:

  1. Single bit per polylog‑neighborhood – Each node has access to only one independent random bit somewhere within its polylog n hop neighborhood. By carefully propagating and reusing these bits, the authors construct algorithms (e.g., for network decomposition, MIS, Δ‑coloring) that run in polylog n rounds with success probability 1 − 1/poly(n). This shows that a single bit of “local” randomness per polylog radius is sufficient.

  2. Polylog‑wise independence – The random bits of different nodes need not be fully independent; it suffices that they are polylog n‑wise independent. Using standard k‑wise independent generators, the authors show that all polylog‑time randomized LCL algorithms continue to work, implying that full independence is unnecessary.

  3. Polylog n global shared bits – Even if the entire network shares only polylog n random bits (and nodes have no private randomness), the same class of algorithms can be simulated. They give an explicit algorithm that computes a (polylog n, polylog n) network decomposition with probability 1 − 1/poly(n) using only these shared bits.
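As a concrete illustration of models (II) and (III), here is a minimal, centralized Python sketch (not code from the paper) of the standard k‑wise independent construction: a short shared seed of k field elements defines a degree‑(k − 1) polynomial over a prime field, each node evaluates it at its own ID, and the resulting values are k‑wise independent across nodes. The function names are hypothetical, and the final parity bit carries an O(1/p) bias that is ignored here for simplicity.

```python
import random

def kwise_seed(k: int, p: int) -> list[int]:
    """Shared seed: k random coefficients of a degree-(k-1) polynomial
    over GF(p). Only k * ceil(log2 p) shared random bits are needed."""
    return [random.randrange(p) for _ in range(k)]

def node_bit(node_id: int, seed: list[int], p: int) -> int:
    """A node evaluates the shared polynomial at its own ID via Horner's
    rule and keeps the parity; across distinct IDs, the polynomial values
    are k-wise independent (the parity adds an O(1/p) bias)."""
    value = 0
    for coeff in reversed(seed):  # seed[i] is the coefficient of x**i
        value = (value * node_id + coeff) % p
    return value & 1

# Example: 4-wise independent bits for nodes 1..10 from one shared seed.
p = (1 << 31) - 1              # a Mersenne prime larger than any node ID
seed = kwise_seed(4, p)
bits = [node_bit(v, seed, p) for v in range(1, 11)]
```

Each node only needs the shared seed and its own ID to derive its bit, which is why a short global random string can stand in for private randomness in this setting.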

Second, the paper addresses error reduction. Roughly, by combining many independent executions of a base algorithm with local checks of its output, the success probability can be boosted dramatically. For any T‑round algorithm (T ≥ polylog n), the boosted algorithm succeeds with probability at least 1 − n^{−2^{ε·log²T}}. In particular, for T = polylog n the error becomes n^{−2^{ε·(log log n)²}}, essentially negligible. Moreover, they prove a partial derandomization result: any polylog‑time algorithm whose success probability is at least 1 − 2^{−2^{log^ε n}} for some ε > 0 can be deterministically simulated in polylog n rounds. For many LCL problems this improves over the previously best known deterministic round complexity of 2^{O(√log n)}.
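The generic repetition‑plus‑local‑check principle behind such boosting can be sketched as follows. This is a simplified, centralized Python illustration under assumed interfaces, not the paper's actual distributed procedure (where selecting a successful run must itself be done locally); `base_algorithm` and `is_locally_valid` are hypothetical placeholders.

```python
import random

def amplify(base_algorithm, is_locally_valid, graph, trials: int):
    """Run `trials` independent copies of a randomized algorithm and
    return the first output passing the validity check. If a single run
    fails with probability q, all runs fail with probability q**trials,
    so the error shrinks exponentially in the number of repetitions."""
    for _ in range(trials):
        output = base_algorithm(graph, random.random)
        if is_locally_valid(graph, output):
            return output
    return None  # every trial failed (probability q**trials)

# Toy instance: a "base algorithm" that succeeds with probability 1/2.
def coin_algorithm(graph, rand):
    return "ok" if rand() < 0.5 else "bad"

def check(graph, output):
    return output == "ok"

result = amplify(coin_algorithm, check, graph=None, trials=64)
```

With 64 repetitions the toy failure probability is 2^{−64}; local checkability of the problem is what makes the `is_locally_valid` step possible at all in the distributed setting.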

The authors also discuss the implications of improving any of these parameters. Strengthening the randomness‑reduction results or achieving even higher success probabilities would imply breakthroughs such as P‑RLOCAL = P‑LOCAL or P‑SLOCAL = P‑LOCAL, which would immediately give polylog n‑time deterministic algorithms for classic open problems like MIS and Δ‑coloring.

In summary, the paper shows that (i) a tiny amount of randomness (one bit per polylog neighborhood, polylog‑wise independence, or polylog n shared bits) is sufficient for all polylog‑time randomized LCL algorithms, and (ii) the error probability can be driven down to n^{−2^{ε·(log log n)²}} with only polylogarithmic overhead, with a threshold beyond which full derandomization is possible. These findings tighten the known relationship between randomized and deterministic distributed computation and set clear limits on how much further the gap can be closed without resolving major open questions in the field.

