Local Distributed Decision
A central theme in distributed network algorithms concerns understanding and coping with the issue of locality. Inspired by sequential complexity theory, we focus on a complexity theory for distributed decision problems. In the context of locality, solving a decision problem requires the processors to independently inspect their local neighborhoods and then collectively decide whether a given global input instance belongs to some specified language. This paper introduces several classes of distributed decision problems, proves separations among them, and presents some complete problems. More specifically, we consider the standard LOCAL model of computation and define LD (for local decision) as the class of decision problems that can be solved in a constant number of communication rounds. We first study the intriguing question of whether randomization helps in local distributed computing, and to what extent. Specifically, we define the corresponding randomized class BPLD, and ask whether LD=BPLD. We provide a partial answer to this question by showing that in many cases, randomization does not help for deciding hereditary languages. In addition, we define the notion of local many-one reductions, and introduce the (nondeterministic) class NLD of decision problems for which there exists a certificate that can be verified in a constant number of communication rounds. We prove that there exists an NLD-complete problem. We also show that there exist problems not in NLD. On the other hand, we prove that the class NLD#n, which is NLD assuming that each processor can access an oracle that provides the number of nodes in the network, contains all (decidable) languages. For this class we provide a natural complete problem as well.
💡 Research Summary
The paper builds a complexity‑theoretic framework for distributed decision problems in the standard LOCAL model, where computation proceeds in synchronous rounds and each node can only exchange messages with its immediate neighbours. The authors introduce several classes of decision problems based on the resources available to the nodes and study the relationships among them.
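To make the model concrete, here is a minimal sketch of one synchronous LOCAL round, under a toy representation chosen for illustration (the graph as an adjacency dict, node state as a dict of known adjacency lists; `local_round`, `make_msg`, and `update` are hypothetical names, not from the paper):

```python
# A minimal sketch of one synchronous LOCAL round. All names here are
# illustrative; the paper's model is abstract and message sizes are unbounded.

def local_round(adj, states, make_msg, update):
    """Every node sends a message to each neighbour, then updates its state."""
    # Phase 1: all messages are prepared from the *current* states (synchrony).
    inbox = {v: [] for v in adj}
    for u in adj:
        for v in adj[u]:
            inbox[v].append(make_msg(u, states[u]))
    # Phase 2: every node updates its state from its own inbox only.
    return {v: update(v, states[v], inbox[v]) for v in adj}

# Example: after one round in which each node sends its id and adjacency
# list, every node knows its radius-1 ball (ids plus edges among neighbours).
adj = {1: [2, 3], 2: [1, 3], 3: [1, 2]}
states = {v: {v: adj[v]} for v in adj}          # each node knows its own edges
states = local_round(
    adj, states,
    make_msg=lambda u, s: (u, adj[u]),
    update=lambda v, s, msgs: {**s, **{u: nbrs for u, nbrs in msgs}},
)
```

After t such rounds a node's state can encode its radius-t view, which is exactly the information a constant-round algorithm may base its output on.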
LD (Local Decision) is defined as the set of languages that can be decided deterministically in a constant number of rounds. In an LD algorithm every node inspects its constant‑radius neighbourhood, possibly together with its own input label, and outputs “yes” or “no”; the global instance is accepted if and only if every node outputs “yes”, so a single “no” vote suffices to reject.
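The acceptance convention can be illustrated on a concrete language. The following is a hedged sketch (not an algorithm from the paper) of an LD-style decider for triangle-freeness: after one round every node knows its neighbours' adjacency lists, so it can check whether it lies on a triangle, and the graph is accepted iff no node rejects.

```python
# Sketch of a one-round LD-style decider for triangle-freeness, assuming the
# radius-1 view already contains the neighbours' adjacency lists.

def node_accepts(v, adj):
    """Node v rejects iff two of its neighbours are adjacent to each other."""
    for u in adj[v]:
        for w in adj[v]:
            if u != w and w in adj[u]:   # edge u-w closes the triangle v-u-w
                return False
    return True

def ld_decide_triangle_free(adj):
    # Global acceptance = conjunction of all local outputs.
    return all(node_accepts(v, adj) for v in adj)

ld_decide_triangle_free({1: [2], 2: [1, 3], 3: [2]})        # path: accepted
ld_decide_triangle_free({1: [2, 3], 2: [1, 3], 3: [1, 2]})  # triangle: rejected
```

Note that triangle-freeness is also hereditary (deleting nodes cannot create a triangle), so it sits inside the subclass studied in the derandomisation result below.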
To assess the power of randomisation, the authors define BPLD (Bounded‑error Probabilistic LD), where each node may use private random bits and the algorithm must accept yes‑instances and reject no‑instances with probabilities bounded away from failure. The central question is whether LD = BPLD. The paper gives a partial answer: in many cases, randomisation does not help for hereditary languages, i.e., languages closed under node deletion. The authors prove that, under suitable bounds on the success probabilities, any hereditary language that can be decided with bounded‑error randomness in constant time can already be decided deterministically in constant time. This result mirrors classic derandomisation theorems in sequential complexity but is non‑trivial in the distributed setting because nodes have only local views.
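One reason the error bounds matter, illustrated with a small calculation of my own (not from the paper): because global acceptance is the conjunction of all local outputs, independent per-node errors compound with the network size, so a naive "each node is right with probability 0.99" guarantee degrades as n grows.

```python
# Illustrative computation: if each of n nodes independently accepts a
# yes-instance with probability p, the instance as a whole is accepted
# (all nodes say yes) with probability p**n, which vanishes as n grows.
# This is why the success probabilities must be stated for the *global*
# accept/reject events rather than per node.

def global_accept_prob(p_node, n):
    return p_node ** n

for n in (10, 100, 1000):
    print(n, global_accept_prob(0.99, n))
```

The usual sequential trick of amplifying success by independent repetition is also delicate here, since the repetitions must be coordinated without global communication.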
The nondeterministic side is captured by NLD (Nondeterministic LD). Here a prover supplies a global certificate (proof) that is distributed among the nodes; each node must verify its local part of the certificate within a constant number of rounds. The paper shows that NLD is strictly larger than LD and that there exists an NLD‑complete problem. The complete problem is essentially “verify that a given labelling of the graph encodes a globally consistent structure” (such as a spanning tree). The reduction used is a local many‑one reduction, a notion introduced in the paper that respects the locality constraints of the model. Moreover, the authors exhibit languages that are not in NLD, demonstrating that constant‑round nondeterministic verification cannot capture all decidable distributed languages.
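The flavour of constant-round certificate verification can be sketched with the classic distance-to-root labelling (a standard proof-labelling example in this literature, not the paper's actual complete problem). The prover assigns each node a certificate (root id, distance); if every local check passes, the distances describe a BFS tree, so the graph is connected and the labelling encodes a spanning tree.

```python
# Hedged sketch of NLD-style verification via distance-to-root certificates.
# cert[v] = (claimed_root_id, claimed_distance_to_root).

def node_verifies(v, adj, cert):
    root, d = cert[v]
    # All neighbours must claim the same root.
    if any(cert[u][0] != root for u in adj[v]):
        return False
    if d == 0:
        return v == root                 # only the root may claim distance 0
    # A non-root node needs a strictly closer neighbour (its tree parent).
    return any(cert[u][1] == d - 1 for u in adj[v])

def nld_verify(adj, cert):
    # Accept iff every node's constant-radius check passes.
    return all(node_verifies(v, adj, cert) for v in adj)

adj = {1: [2], 2: [1, 3], 3: [2]}
nld_verify(adj, {1: (1, 0), 2: (1, 1), 3: (1, 2)})  # valid certificate
nld_verify(adj, {1: (1, 0), 2: (1, 5), 3: (1, 2)})  # broken certificate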
Finally, the authors consider a modest form of global information: each node can query an oracle that returns the total number of nodes n in the network. This leads to the class NLD#n. With knowledge of n, the prover can encode the entire input into the certificate in a way that can be locally checked, and the paper proves that NLD#n contains every decidable language. A natural NLD#n‑complete problem is presented: given the node count and a labeling, verify that the labeling correctly represents an arbitrary Turing‑machine computation of length n. This shows that a single global numeric value dramatically expands the verification power of constant‑round nondeterminism.
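The map-certificate idea can be sketched as follows, under simplifying assumptions of my own (a connected graph, and certificates that reuse the real node ids; the names `node_checks` and `verify_all` are illustrative). Each node receives a copy of the whole graph as its certificate; knowing n, it checks the map's size, checks the map against its own neighbourhood, checks that its neighbours hold the identical map, and then evaluates the language on the map locally.

```python
# Hedged sketch of NLD#n verification with a full-map certificate.
# certs[v] is node v's copy of the claimed global graph (adjacency dict).

def node_checks(v, adj, certs, n, predicate):
    m = certs[v]
    if len(m) != n:                            # the oracle value n pins the size
        return False
    if set(m.get(v, ())) != set(adj[v]):       # map must match my neighbourhood
        return False
    if any(certs[u] != m for u in adj[v]):     # neighbours hold the same map
        return False
    return predicate(m)                        # any decidable check, run locally

def verify_all(adj, certs, n, predicate):
    return all(node_checks(v, adj, certs, n, predicate) for v in adj)

# Example: "the graph has an even number of edges" is far from locally
# decidable, but trivial once the verified map is in hand.
adj = {1: [2], 2: [1, 3], 3: [2]}
even_edges = lambda m: (sum(len(e) for e in m.values()) // 2) % 2 == 0
verify_all(adj, {v: adj for v in adj}, 3, even_edges)
```

Without the value n, a prover could present a consistent map of a strict subgraph to every node; the size check is what rules that out, which is exactly the extra power the #n oracle provides.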
The main contributions can be summarised as follows:
- Formal definition of LD, BPLD, NLD, and NLD#n in the LOCAL model.
- Proof that, under suitable error bounds, randomisation does not help for hereditary languages, so deterministic and randomised constant‑time decision coincide on that important subclass.
- Introduction of local many‑one reductions and identification of an NLD‑complete problem, together with examples of languages outside NLD.
- Demonstration that a tiny amount of global information (the network size) lifts nondeterministic local verification to full decidability, and provision of a natural NLD#n‑complete problem.
- A systematic separation diagram among the four classes, clarifying the expressive power of deterministic, randomized, and nondeterministic local computation.
Overall, the work provides a foundational taxonomy for distributed decision problems, clarifies the limits of randomness and certificates under locality constraints, and opens avenues for future research on tighter separations, alternative oracles, and practical implications for systems where only limited global information is available.