Local Computation: Lower and Upper Bounds

Reading time: 5 minutes

📝 Original Info

  • Title: Local Computation: Lower and Upper Bounds
  • ArXiv ID: 1011.5470
  • Date: 2010-11
  • Authors: Fabian Kuhn, Thomas Moscibroda, Roger Wattenhofer

📝 Abstract

The questions of what can be computed, and how efficiently, are at the core of computer science. Not surprisingly, in distributed systems and networking research, an equally fundamental question is what can be computed in a *distributed* fashion. More precisely, if nodes of a network must base their decision on information in their local neighborhood only, how well can they compute or approximate a global (optimization) problem? In this paper we give the first poly-logarithmic lower bound on such local computation for (optimization) problems including minimum vertex cover, minimum (connected) dominating set, maximum matching, maximal independent set, and maximal matching. In addition we present a new distributed algorithm for solving general covering and packing linear programs. For some problems this algorithm is tight with the lower bounds, for others it is a distributed approximation scheme. Together, our lower and upper bounds establish the local computability and approximability of a large class of problems, characterizing how much local information is required to solve these tasks.


📄 Full Content

Many of the most fascinating systems in the world are large and complex networks, such as human society, the Internet, or the brain. Such systems have in common that they are composed of a multiplicity of individual entities, so-called nodes: human beings in society, hosts in the Internet, or neurons in the brain. Each individual node can directly communicate only with a small number of neighboring nodes. For instance, most human communication is between acquaintances or within the family, and neurons are directly linked to only a relatively small number of other neurons. On the other hand, despite each node being inherently "near-sighted," i.e., restricted to local communication, the system as a whole is supposed to work towards some kind of global goal, solution, or equilibrium.

In this work we investigate the possibilities and limitations of local computation, i.e., to what degree local information is sufficient to solve global tasks. Formally, we are given a graph G = (V, E) with |V| = n, and a parameter k (k might depend on n or some other property of G). At each node v ∈ V there is an independent agent (for simplicity, we identify the agent at node v with v itself). Every node v ∈ V has a unique identifier id(v)¹ and possibly some additional input. We assume that each node v ∈ V can learn the complete neighborhood Γ_k(v) up to distance k in G (see below for a formal definition of Γ_k(v)). Based on this information, all nodes must make independent computations and individually decide on their outputs without communicating with each other. Hence, the output of each node v ∈ V can be computed as a function of its k-neighborhood Γ_k(v).
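The model above can be sketched in a few lines of code: a node's view Γ_k(v) is the portion of the graph within distance k, and its output is a pure function of that view. The helper names and the toy decision rule below are ours, not from the paper; it is a minimal illustration, not the paper's algorithm.

```python
from collections import deque

def k_neighborhood(adj, v, k):
    """Return (nodes, edges) within distance k of v -- the view Gamma_k(v)
    a node may base its decision on.  Plain BFS up to depth k."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        if dist[u] == k:
            continue  # do not explore past the k-hop horizon
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    nodes = set(dist)
    # Only edges incident to a node strictly inside the horizon are visible.
    edges = {frozenset((u, w)) for u in nodes if dist[u] < k
             for w in adj[u] if w in nodes}
    return nodes, edges

def decide(adj, v, k=1):
    """Toy local rule (hypothetical, for illustration only): output True iff
    v holds the largest identifier in its view.  Note the output depends on
    Gamma_k(v) alone -- no further communication takes place."""
    nodes, _ = k_neighborhood(adj, v, k)
    return v == max(nodes)

# Path graph 1-2-3-4; every node decides independently from its 1-hop view.
adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
chosen = [v for v in adj if decide(adj, v)]  # → [4]
```

The selected nodes form an independent set (two neighbors can never both be local maxima), which hints at how purely local rules can enforce global properties.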

Synchronous Message Passing Model: The graph-theoretic local computation model described above is equivalent to the classic message passing model of distributed computing. In this model, the distributed system is modeled as a point-to-point communication network, described by an undirected graph G = (V, E), in which each vertex v ∈ V represents a node (host, device, processor, …) of the network, and an edge (u, v) ∈ E is a bidirectional communication channel that connects the two nodes. Initially, nodes have no knowledge about the network graph; they only know their own identifier and potential additional inputs. All nodes wake up simultaneously and computation proceeds in synchronous rounds. In each round, every node can send one arbitrarily long message to each of its neighbors. Since we consider point-to-point networks, a node may send different messages to different neighbors in the same round. Additionally, every node is allowed to perform local computations based on information obtained in messages of previous rounds. Communication is reliable, i.e., every message that is sent during a communication round is correctly received by the end of the round. An algorithm's time complexity is defined as the number of communication rounds until all nodes terminate.²

The above is a standard model of distributed computing and is generally known as the LOCAL model [46,37]. It is the strongest possible model when studying the impact of locally restricted knowledge on computability, because it focuses entirely on the locality of distributed problems and abstracts away other issues arising in the design of distributed algorithms (e.g., the need for small messages, fast local computations, congestion, asynchrony, packet loss, etc.). It is thus the most fundamental model for proving lower bounds on local computation [37]: any lower bound is a true consequence of locality restrictions.
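To make the round structure concrete, here is a small simulator sketch of the synchronous model just described (the function names and the flooding example are ours, for illustration): messages of a round are all prepared before any is delivered, delivery is reliable, and the time complexity is simply the number of rounds.

```python
def run_synchronous(adj, init_state, send, receive, rounds):
    """Simulate the LOCAL model: in each synchronous round every node sends
    one (arbitrarily long) message per neighbor, then updates its state from
    the messages received.  Delivery is reliable and lockstep."""
    state = dict(init_state)
    for _ in range(rounds):
        # All messages of a round are computed from the pre-round states.
        inboxes = {v: [] for v in adj}
        for u in adj:
            for v in adj[u]:
                inboxes[v].append(send(u, v, state[u]))  # per-neighbor message
        for v in adj:
            state[v] = receive(v, state[v], inboxes[v])
    return state

# Example: flooding the maximum identifier on a path graph 1-2-3-4.
# After diam(G) rounds, every node knows the global maximum.
adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
final = run_synchronous(
    adj,
    init_state={v: v for v in adj},
    send=lambda u, v, s: s,                      # broadcast current best
    receive=lambda v, s, msgs: max([s] + msgs),  # keep the largest seen
    rounds=3,                                    # diameter of this path graph
)  # → {1: 4, 2: 4, 3: 4, 4: 4}
```

Note that `send` takes the target as an argument, reflecting that a node may send different messages to different neighbors in the same round.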

Equivalence of Time Complexity and Neighborhood-Information: There is a one-to-one correspondence between the time complexity of distributed algorithms in the LOCAL model and the graph-theoretic notion of neighborhood-information. In particular, a distributed algorithm with time complexity k (i.e., in which each node performs k communication rounds) is equivalent to a scenario in which distributed decision makers at the nodes of a graph must base their decision on (complete) knowledge about their k-hop neighborhood Γ_k(v) only. This is true because with unlimited-size messages, every node v ∈ V can easily collect all IDs and interconnections of all nodes in its k-hop neighborhood in k communication rounds. On the other hand, a node v clearly cannot obtain any information from a node at distance k + 1 or further away, because this information would require more than k rounds to reach v. Thus, the LOCAL model relates distributed computation to the algorithmic theory of the value of information as studied, for example, in [44]: the question of how much local knowledge is required for distributed decision makers to solve a global task.

¹ All our results hold for any possible ID space, including the standard case where IDs are the numbers 1, …, n.

² Notice that this synchronous message passing model captures many practical systems, including, for example, Google's Pregel system, a practically imp
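The equivalence argument above can be sketched directly (helper name ours, for illustration): with unlimited message sizes, k rounds of every node forwarding everything it knows gather exactly the k-hop view, while information from strictly farther away cannot arrive in time.

```python
def collect_views(adj, k):
    """Flooding sketch: each node starts knowing its incident edges and, in
    every round, forwards its entire current view to all neighbors.  After k
    rounds a node knows every edge incident to a node within distance k --
    and nothing that would need more than k hops to reach it."""
    view = {v: {frozenset((v, w)) for w in adj[v]} for v in adj}
    for _ in range(k):
        nxt = {v: set(view[v]) for v in adj}
        for u in adj:
            for v in adj[u]:
                nxt[v] |= view[u]  # unlimited message size: ship the whole view
        view = nxt
    return view

# Path graph 1-2-3-4: after one round, node 1 has learned edge (2,3) via
# node 2, but edge (3,4) -- originating two hops away -- has not arrived yet.
adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
views = collect_views(adj, 1)
```

This is exactly the two directions of the equivalence: k rounds suffice to learn Γ_k(v), and no algorithm, however clever, can see past the k-hop horizon in k rounds.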


Reference

This content is AI-processed based on open access ArXiv data.
