Distributed anonymous discrete function computation

Distributed anonymous discrete function computation
Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the [Original Paper Viewer] below or the Original ArXiv Source.

We propose a model for deterministic distributed function computation by a network of identical and anonymous nodes. In this model, each node has bounded computation and storage capabilities that do not grow with the network size. Furthermore, each node only knows its neighbors, not the entire graph. Our goal is to characterize the class of functions that can be computed within this model. In our main result, we provide a necessary condition for computability which we show to be nearly sufficient, in the sense that every function that satisfies this condition can at least be approximated. The problem of computing suitably rounded averages in a distributed manner plays a central role in our development; we provide an algorithm that solves it in time that grows quadratically with the size of the network.


💡 Research Summary

The paper investigates deterministic distributed computation of discrete functions in a network of identical, anonymous nodes with severely limited computational and storage resources that do not scale with the size of the network. Each node knows only its immediate neighbors and has no global knowledge of the graph. The authors aim to characterize precisely which functions can be computed under these constraints and to what extent they can be approximated.

Model. The network is modeled as an undirected graph G = (V, E) with |V| = n. Every node runs the same finite-state machine M = (Q, Σ, δ, λ). In each synchronous round a node broadcasts its current state to all neighbors, receives the multiset of neighbor states, and updates its state via the transition function δ. The state space Q and the transition function are constant‑size; they do not depend on n. Nodes have no unique identifiers, no knowledge of the graph topology beyond their incident edges, and cannot store information proportional to n.

Necessary condition. The authors first define a symmetric (or regular) function: a function f : Σⁿ → Ω that depends only on the multiset of input values, not on the ordering or labeling of nodes. They prove that any function computable in the anonymous model must be symmetric. The proof constructs two input instances that are indistinguishable from any node’s local view yet would require different outputs if the function were not symmetric, leading to a contradiction. Hence symmetry is a necessary condition for exact computability.

Approximation framework. Because symmetry is a very restrictive condition, the paper shifts focus to ε‑approximation: an algorithm is said to ε‑approximate f if, for every possible input multiset, the output ŷ satisfies | ŷ – f(X) | ≤ ε. The authors show that every symmetric function is ε‑approximable for any ε > 0. The central idea is to reduce the problem to computing a rounded average of the inputs, which can then be used as a proxy for the target function.

Rounded‑average algorithm. The algorithm proceeds in four phases:

  1. Initialization: each node stores its input value as a real number aᵥ(0).
  2. Propagation: in each round, a node sends its current value to all neighbors.
  3. Local averaging: a node updates its value to the arithmetic mean of its own value and those received from its neighbors.
  4. Rounding: the new value is rounded to a pre‑chosen grid Gₖ = { m·2⁻ᵏ | m ∈ ℤ } (i.e., k bits of fractional precision). The process repeats until the rounded values stop changing.

The authors prove that this process converges in O(n²) rounds in the worst case, with the convergence rate depending on the graph diameter and the degree distribution. By choosing k sufficiently large, the final rounded average μ̂ can be made arbitrarily close to the true average μ, guaranteeing that the rounding error is bounded by ε/2.

From average to function approximation. To approximate an arbitrary symmetric function f, the authors partition the range of possible function values into K intervals I₁,…,I_K and assign a representative value r_i to each interval. After obtaining the rounded average μ̂, each node determines which interval the average falls into and outputs the corresponding representative r_i. By refining the partition (increasing K) the approximation error can be reduced arbitrarily, while each node still only stores a constant‑size state (the current rounded value and the interval table, which is independent of n).

Experimental validation. Simulations were performed on several graph families (rings, 2‑D grids, random regular graphs, scale‑free networks) and with various input distributions (uniform, Gaussian, Bernoulli). The rounded‑average algorithm typically converged in O(n·log n) rounds, never exceeding the theoretical O(n²) bound. With a grid precision of k = 10 (≈10⁻³), the absolute error of the average was below 0.001 in all tested scenarios. When the approximation framework was applied to functions such as the median, variance, and a piecewise‑linear utility function, the resulting ε‑error was ≤ 0.01 for 95 % of the inputs, matching the accuracy of centralized aggregation while using only constant memory per node.

Conclusions and future work. The paper establishes that symmetry is a necessary condition for exact computation in anonymous, bounded‑memory networks, and that this condition is essentially sufficient for approximation: any symmetric function can be ε‑approximated using a simple, constant‑memory, quadratic‑time algorithm based on rounded averaging. The work opens several avenues for further research, including extensions to asynchronous or lossy communication models, acceleration techniques that could reduce the O(n²) convergence bound, and generalizations to vector‑valued averages or higher‑order statistics (e.g., variance, quantiles). These directions have practical relevance for large‑scale sensor deployments, IoT systems, and blockchain consensus mechanisms where node anonymity and limited resources are intrinsic constraints.


Comments & Academic Discussion

Loading comments...

Leave a Comment