Computing Least Fixed Points with Overwrite Semantics in Parallel and Distributed Systems
We present methods to compute least fixed points of multiple monotone inflationary functions in parallel and distributed settings. While the classic Knaster-Tarski theorem addresses a single function with sequential iteration, modern computing systems require parallel execution with overwrite semantics, non-atomic updates, and stale reads. We prove three convergence theorems under progressively relaxed synchronization: (1) interleaving semantics with fair scheduling; (2) parallel execution with update-only-on-change semantics (processes write only the coordinates whose values change); and (3) distributed execution with bounded staleness (updates propagate within $T$ rounds) and $i$-locality (each process modifies only its own component). Our approach differs fundamentally from prior work: Cousot and Cousot's chaotic iteration relies on join-based merges that preserve information, and Bertsekas's asynchronous methods assume contractions; we instead use coordinate-wise overwriting together with structural constraints (locality, bounded staleness). Applications include parallel and distributed algorithms for transitive closure, stable marriage, shortest paths, and fair division with subsidies. Our results provide the first exact least-fixed-point convergence guarantees for overwrite-based parallel updates without join operations or contraction assumptions.
💡 Research Summary
The paper tackles the problem of computing the least common fixed point of several monotone inflationary functions in modern parallel and distributed environments, where updates are performed by overwriting individual coordinates rather than by merging via lattice joins. Classical fixed‑point theorems such as Knaster‑Tarski or Kleene address a single monotone operator and assume a sequential iteration from the bottom element ⊥. Those results do not capture the realities of today’s systems: multiple workers operate concurrently, each may read stale data, and writes are destructive (last‑writer‑wins). The authors therefore develop three convergence theorems that progressively relax synchronization assumptions while still guaranteeing convergence to the exact least fixed point.
- Interleaving semantics with fair scheduling – The first theorem assumes a fair scheduler that repeatedly selects one of the functions f_i and applies it atomically to the current global state. Each f_i is monotone and inflationary, the lattice L is finite, and the computation starts from ⊥. Under these mild conditions the sequence of states forms an ascending chain that must stabilize, and the limit is shown to be the least common fixed point of the whole family. The proof mirrors the classic argument for a single function but extends it to an arbitrary interleaving of many functions.
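The first setting can be illustrated with a minimal sketch. The lattice, the two functions `f0`/`f1`, and the round-robin scheduler below are hypothetical examples chosen for illustration, not the paper's constructions; the only assumptions carried over are that each function is monotone and inflationary on a finite lattice and that scheduling is fair.

```python
from itertools import cycle

# Finite lattice: pairs over the chain {0,...,CAP}, ordered componentwise;
# bottom element is (0, 0).
CAP = 5

def f0(x):
    # Hypothetical monotone, inflationary function: raises x[1] toward x[0].
    return (x[0], max(x[1], min(x[0], CAP)))

def f1(x):
    # Hypothetical monotone, inflationary function: raises x[0] toward 3.
    return (max(x[0], 3), x[1])

def interleave(fns, bottom, max_steps=100):
    """Fair scheduling: apply each f_i atomically in round-robin order.
    On a finite lattice the ascending chain of states stabilizes, and the
    limit is the least common fixed point of the whole family."""
    x = bottom
    for f in cycle(fns):
        nxt = f(x)
        if nxt == x and all(g(x) == x for g in fns):
            return x  # common fixed point reached
        x = nxt
        max_steps -= 1
        if max_steps == 0:
            return x

print(interleave([f0, f1], (0, 0)))  # → (3, 3)
```

Round-robin is just one fair schedule; any schedule that applies every f_i infinitely often yields the same limit.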
- Parallel execution with update‑only‑on‑change – In realistic parallel systems, many functions are evaluated simultaneously. The authors observe that naïve “read‑compute‑write” with full‑vector writes can break convergence (a simple two‑variable counterexample is given). To avoid this, they impose two constraints: (a) the underlying lattice must be distributive, and (b) each process writes only the coordinates whose values actually change (the update‑only‑on‑change model). Because no write can decrease a coordinate, the global state still evolves monotonically despite overlapping writes. The distributivity ensures that coordinate‑wise overwrites preserve the lattice order. Consequently, any fair execution under these constraints converges to the same least common fixed point.
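The update-only-on-change rule can be sketched as follows. This is a sequential simulation of the write discipline only (a real run would interleave stale snapshot reads); the functions and the random schedule are illustrative assumptions, not taken from the paper.

```python
import random

# Global state: two coordinates over the finite chain {0..5}.
state = [0, 0]

def f0(x):
    # Hypothetical update: raises coordinate 1 up to coordinate 0.
    return [x[0], max(x[1], x[0])]

def f1(x):
    # Hypothetical update: raises coordinate 0 toward 3.
    return [max(x[0], 3), x[1]]

def step(f, state):
    """Update-only-on-change: read a snapshot, apply f, then overwrite
    only the coordinates whose value actually changed. Coordinates the
    process leaves alone are never written, so its (possibly stale)
    snapshot cannot clobber another process's newer progress there."""
    snap = list(state)  # in a real system this read may be stale
    out = f(snap)
    for j, (old, new) in enumerate(zip(snap, out)):
        if new != old:  # write only on change (last-writer-wins overwrite)
            state[j] = new

random.seed(0)
for _ in range(50):  # a fair-enough random schedule of the two processes
    step(random.choice([f0, f1]), state)
print(state)  # → [3, 3], the least common fixed point
```

Contrast with the naïve full-vector write: there, a process holding a stale snapshot would write back old values for coordinates it did not change, undoing concurrent progress.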
- Distributed execution with bounded staleness and i‑locality – The most general setting models a truly distributed system where each node i stores only its own component of the global state and updates only that component (i‑locality), while its view of the other components may lag, with updates guaranteed to propagate within T rounds (bounded staleness). Even under these weak assumptions, the execution still converges to the exact least common fixed point.
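A minimal round-based simulation of the third setting, under stated assumptions: the local rules `g0`/`g1` and the chain lattice are hypothetical examples; the only features carried over from the model are i-locality (node i writes only coordinate i, which it always sees fresh) and bounded staleness (views of other coordinates lag by at most T rounds).

```python
T = 2        # staleness bound: remote updates arrive within T rounds
ROUNDS = 30

def g0(view):
    # Hypothetical local rule for node 0: raise own coordinate toward 3.
    return max(view[0], 3)

def g1(view):
    # Hypothetical local rule for node 1: pull own coordinate up to
    # the (possibly stale) value it sees for coordinate 0.
    return max(view[1], view[0])

history = [[0, 0]]  # history[r] = global state after round r; start at bottom
for r in range(ROUNDS):
    cur = history[-1]
    stale = history[max(0, len(history) - 1 - T)]  # view lagging by T rounds
    nxt = []
    for i, g in enumerate((g0, g1)):
        view = list(stale)
        view[i] = cur[i]  # i-locality: a node always sees its own component fresh
        nxt.append(g(view))  # node i overwrites only its own coordinate
    history.append(nxt)

print(history[-1])  # → [3, 3]: staleness delays progress but never undoes it
```

Because node i is the sole writer of coordinate i and each g_i is inflationary in that coordinate, overwrites never move the state downward; bounded staleness then ensures every node eventually observes enough progress to reach the exact least common fixed point.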