📝 Original Info
- Title: A Knowledge-Based Analysis of Global Function Computation
- ArXiv ID: 0707.3435
- Date: 2007-08-08
- Authors: Joseph Y. Halpern (Cornell University), Sabina Petride (Cornell University)
📝 Abstract
Consider a distributed system N in which each agent has an input value and each communication link has a weight. Given a global function, that is, a function f whose value depends on the whole network, the goal is for every agent to eventually compute the value f(N). We call this problem global function computation. Various solutions for instances of this problem, such as Boolean function computation, leader election, (minimum) spanning tree construction, and network determination, have been proposed, each under particular assumptions about what processors know about the system and how this knowledge can be acquired. We give a necessary and sufficient condition for the problem to be solvable that generalizes a number of well-known results. We then provide a knowledge-based (kb) program (like those of Fagin, Halpern, Moses, and Vardi) that solves global function computation whenever possible. Finally, we improve the message overhead inherent in our initial kb program by giving a counterfactual belief-based program that also solves global function computation whenever possible, but where agents send messages only when they believe it is necessary to do so. The latter program is shown to be implemented by a number of well-known algorithms for solving leader election.
📄 Full Content
A Knowledge-Based Analysis of Global Function Computation∗
Joseph Y. Halpern
Cornell University
Ithaca, NY 14853
halpern@cs.cornell.edu
Sabina Petride
Cornell University
Ithaca, NY 14853
petride@cs.cornell.edu
Abstract
Consider a distributed system N in which each agent has an input value and each communication link has a weight. Given a global function, that is, a function f whose value depends on the whole network, the goal is for every agent to eventually compute the value f(N). We call this problem global function computation. Various solutions for instances of this problem, such as Boolean function computation, leader election, (minimum) spanning tree construction, and network determination, have been proposed, each under particular assumptions about what processors know about the system and how this knowledge can be acquired. We give a necessary and sufficient condition for the problem to be solvable that generalizes a number of well-known results [Attiya, Snir, and Warmuth 1988; Yamashita and Kameda 1996; Yamashita and Kameda 1999]. We then provide a knowledge-based (kb) program (like those of Fagin, Halpern, Moses, and Vardi [1995, 1997]) that solves global function computation whenever possible. Finally, we improve the message overhead inherent in our initial kb program by giving a counterfactual belief-based program [Halpern and Moses 2004] that also solves global function computation whenever possible, but where agents send messages only when they believe it is necessary to do so. The latter program is shown to be implemented by a number of well-known algorithms for solving leader election.
1 Introduction
Consider a distributed system N in which each agent has an input value and each communication link has a weight. Given a global function, that is, a function f whose value depends on the whole network, the goal is for every agent to eventually compute the value f(N). We call this problem global function computation. Many distributed protocols involve computing some global function of the network. This problem is typically straightforward if the network is known. For example, if the goal is to compute a minimum spanning tree of the network, one can simply apply one of the well-known algorithms proposed by Kruskal or Prim. However, in a distributed setting, agents may have only local information, which makes the problem more difficult. For example, the algorithm proposed by Gallager, Humblet and Spira [1983] is known for its complexity.1 Moreover, the algorithm does not work for all networks, although it is guaranteed to work correctly when agents have distinct inputs and no two edges have identical weights.
∗Work supported in part by NSF under grants CTC-0208535, ITR-0325453, and IIS-0534064, by ONR under grant N00014-02-1-0455, by the DoD Multidisciplinary University Research Initiative (MURI) program administered by the ONR under grants N00014-01-1-0795 and N00014-04-1-0725, and by AFOSR under grants F49620-02-1-0101 and FA9550-05-1-0055.
1Gallager, Humblet, and Spira’s algorithm does not actually solve the minimum spanning tree problem as we have defined it, since agents do not compute the minimum spanning tree, but only learn relevant information about it, such as which of its edges lead in the direction of the root.
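To make the contrast concrete: once the whole weighted network N is available to a single agent, computing a global function such as a minimum spanning tree is routine. The following minimal Python sketch of Kruskal's algorithm illustrates this centralized case; the function name kruskal_mst, the edge representation, and the example network are illustrative choices, not taken from the paper.

```python
# Minimal sketch: computing one global function, a minimum spanning tree,
# when the whole weighted network N is available locally (the easy,
# centralized case discussed above). Names and the example network are
# illustrative, not from the paper.

def kruskal_mst(nodes, weighted_edges):
    """nodes: iterable of node ids; weighted_edges: list of (weight, u, v)."""
    parent = {v: v for v in nodes}

    def find(v):
        # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    mst = []
    for w, u, v in sorted(weighted_edges):   # consider lightest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                         # edge joins two components: keep it
            parent[ru] = rv
            mst.append((u, v, w))
    return mst

# Example: a 4-node network with distinct edge weights.
edges = [(1, 'a', 'b'), (2, 'b', 'c'), (3, 'a', 'c'), (4, 'c', 'd')]
print(kruskal_mst(['a', 'b', 'c', 'd'], edges))
# [('a', 'b', 1), ('b', 'c', 2), ('c', 'd', 4)]
```

The difficulty studied in the paper arises precisely because no agent starts out with this global view.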
Computing shortest paths between nodes in a network is another instance of global function computation that has been studied extensively [Ford and Fulkerson 1962; Bellman 1958]. The well-known leader election problem [Lynch 1997] can also be viewed as an instance of global function computation in all systems where agents have distinct inputs: the leader is the agent with the largest (or smallest) input. The difficulty in solving global function computation depends on what processors know. For example, when processors know their identifiers (names) and all ids are unique, several solutions for the leader election problem have been proposed, both in the synchronous and asynchronous settings [Chang and Roberts 1979; Le Lann 1977; Peterson 1982]. On the other hand, Angluin [1980] and Johnson and Schneider [1985] proved that it is impossible to deterministically elect a leader if agents may share names. In a similar vein, Attiya, Snir and Warmuth [1988] prove that there is no deterministic algorithm that computes a non-constant Boolean global function in a ring of unknown and arbitrarily large size if agents’ names are not necessarily unique. Attiya, Gorbach, and Moran [2002] characterize what can be computed in what they call totally anonymous shared memory systems, where access to shared memory is anonymous.
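For concreteness, the sketch below simulates leader election on a unidirectional ring with unique identifiers, in the spirit of the Chang-Roberts style algorithms cited above: each agent forwards only identifiers larger than its own, so the identifier that survives a full trip around the ring is the maximum input. The synchronous round loop, the function name ring_leader_election, and the example ids are illustrative assumptions, not code from the paper or from the cited works.

```python
# Sketch of leader election on a unidirectional ring with unique ids,
# in the spirit of Chang and Roberts [1979]: agents forward only ids
# larger than their own; an agent that sees its own id come back is the
# leader (and holds the maximum input). The round-based simulation is an
# illustrative assumption, not code from the paper.

def ring_leader_election(ids):
    n = len(ids)
    tokens = list(ids)            # tokens[i]: id waiting to be forwarded by agent i
    leader = None
    while leader is None:
        new_tokens = [None] * n
        for i in range(n):
            tok = tokens[i]
            if tok is None:
                continue
            j = (i + 1) % n       # successor on the ring
            if tok == ids[j]:
                leader = tok      # the id travelled all the way around
            elif tok > ids[j]:
                new_tokens[j] = tok   # forward larger ids only
            # smaller ids are swallowed by the receiving agent
        tokens = new_tokens
    return leader

print(ring_leader_election([3, 7, 2, 9, 5]))  # 9, the maximum id
```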
We aim to better understand what agents need to know to compute a global function. We do this using the framework of knowledge-based (kb) programs, proposed by Fagin, Halpern, Moses and Vardi [1995, 1997]. Intuitively, in a kb program, an agent’s actions may depend on his knowledge. To say that the agent with identity i knows some fact ϕ we simply
…(Full text truncated)…
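As a rough illustration of an action being conditioned on knowledge, consider a rule of the form "while you do not know f(N), share everything you know with your neighbors". The sketch below simulates such a rule under strong simplifying assumptions (synchronous rounds, a network whose size is known to every agent, and "knowing f(N)" approximated by having collected every agent's input); it is only meant to convey the flavor of the approach, not the paper's actual knowledge-based program.

```python
# Illustrative sketch only: a standard-program approximation of a rule of
# the form "while you do not know f(N), share what you know". We assume a
# synchronous network whose size is known to all agents, so "knowing f(N)"
# is approximated by having collected all input values; this is not the
# paper's kb program, which is stated in terms of the knowledge operator.

def compute_global_function(adjacency, inputs, f):
    """adjacency: dict agent -> list of neighbors; inputs: dict agent -> value."""
    n = len(inputs)
    known = {i: {i: inputs[i]} for i in adjacency}      # what each agent has heard so far
    while any(len(known[i]) < n for i in adjacency):    # "does not yet know f(N)"
        snapshot = {i: dict(known[i]) for i in adjacency}   # messages sent this round
        for i in adjacency:
            for j in adjacency[i]:                      # share everything you know
                known[j].update(snapshot[i])
    return {i: f(known[i]) for i in adjacency}

# Example: f(N) = maximum input, i.e., leader election by input value.
ring = {0: [1], 1: [2], 2: [0]}                         # unidirectional 3-ring
print(compute_global_function(ring, {0: 4, 1: 9, 2: 1},
                              lambda vals: max(vals.values())))
# {0: 9, 1: 9, 2: 9}
```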