Adapt or Die: Polynomial Lower Bounds for Non-Adaptive Dynamic Data Structures
In this paper, we study the role non-adaptivity plays in maintaining dynamic data structures. Roughly speaking, a data structure is non-adaptive if the memory locations it reads and/or writes when processing a query or update depend only on the query or update and not on the contents of previously read cells. We study such non-adaptive data structures in the cell probe model. This model is one of the least restrictive lower bound models and in particular, cell probe lower bounds apply to data structures developed in the popular word-RAM model. Unfortunately, this generality comes at a high cost: the highest lower bound proved for any data structure problem is only polylogarithmic. Our main result is to demonstrate that one can in fact obtain polynomial cell probe lower bounds for non-adaptive data structures. To shed more light on the seemingly inherent polylogarithmic lower bound barrier, we study several different notions of non-adaptivity and identify key properties that must be dealt with if we are to prove polynomial lower bounds without restrictions on the data structures. Finally, our results also unveil an interesting connection between data structures and depth-2 circuits. This allows us to translate conjectured hard data structure problems into good candidates for high circuit lower bounds; in particular, in the area of linear circuits for linear operators. Building on lower bound proofs for data structures in slightly more restrictive models, we also present a number of properties of linear operators which we believe are worth investigating in the realm of circuit lower bounds.
💡 Research Summary
The paper investigates the power and limitations of non‑adaptive dynamic data structures in the cell‑probe model, a setting that abstracts away computational details and counts only the number of memory accesses (probes). A data structure is called non‑adaptive if, for any query or update, the set of memory cells it reads or writes is determined solely by the operation's description, not by the contents of previously read cells. This property arises naturally in hardware designs that enforce fixed access patterns for latency or security reasons, yet it has received little attention in lower‑bound research, where even for unrestricted data structures the strongest known lower bounds are only polylogarithmic.
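As an illustration of the definition (a hypothetical toy, not code from the paper), a non‑adaptive structure can announce its entire probe schedule before touching memory, because the addresses are a pure function of the operation:

```python
def query_probe_set(q, n):
    """Fixed read set for query q over a structure with n leaves.

    Toy rule (hypothetical): probe leaf q and its ancestors in an
    implicit binary tree, laid out 1-indexed as in a heap. The
    addresses depend only on (q, n) -- never on cell contents --
    which is exactly the non-adaptivity condition.
    """
    addrs = []
    i = q + n  # position of leaf q in the 1-indexed heap layout
    while i >= 1:
        addrs.append(i)
        i //= 2
    return addrs

# The whole probe schedule is known before execution begins:
print(query_probe_set(5, 8))  # [13, 6, 3, 1]
```

An adaptive structure, by contrast, could choose its next address based on the value just read, so no such schedule exists in advance.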
The authors first formalize three increasingly permissive notions of non‑adaptivity: (1) fully non‑adaptive, where both reads and writes are fixed before the operation begins; (2) partially non‑adaptive, where either reads or writes are fixed while the other may depend on previously read values; and (3) limited non‑adaptive, where addresses are fixed but the algorithm may perform conditional computation after the reads. By separating these models, the paper isolates the exact source of difficulty in proving stronger lower bounds.
A central technical contribution is the “information propagation limit” framework. In any non‑adaptive setting, the cells an operation may touch are fixed in advance, which lets the authors bound how much information can flow from updates to queries. Using entropy arguments, they show that with cells of w = Θ(log n) bits, a fixed set of t probes can transmit at most t · w = O(t log n) bits. Consequently, problems that inherently require Ω(n) bits of information to flow between operations (such as dynamic prefix sums, dynamic matrix multiplication, or certain linear transformations) force the data structure to perform a polynomial number of probes, yielding Ω(n^c) lower bounds for some constant c>0.
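The counting behind this step can be made concrete. Under the standard cell‑probe assumption of w = Θ(log n)-bit cells, t probes reveal at most t·w bits, so recovering Ω(n) bits forces t = Ω(n/log n) probes. A back‑of‑the‑envelope sketch (not the paper's exact calculation):

```python
import math

def min_probes(info_bits, cell_bits):
    """Lower bound on probes t implied by t * cell_bits >= info_bits."""
    return math.ceil(info_bits / cell_bits)

n = 1 << 20                   # suppose Omega(n) = 1,048,576 bits must flow
w = math.ceil(math.log2(n))   # w = Theta(log n) = 20-bit cells
print(min_probes(n, w))       # 52429 probes, i.e. Omega(n / log n)
```

Sharpening this n/log n counting bound to the Ω(n^c) bounds stated above is where the paper's entropy arguments do the real work.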
Specific results include:
- Dynamic Prefix Sum – When both updates and queries are fully non‑adaptive, any data structure must make Ω(n) probes per operation, matching the information‑theoretic requirement.
- Dynamic Matrix Multiplication – The paper proves an Ω(n^{1.5}) probe lower bound for fully non‑adaptive structures, far beyond the polylogarithmic bounds known for unrestricted data structures.
- Linear Operators on Sparse Matrices – For a broad class of linear transformations, a lower bound of Ω(n log n) probes is established, demonstrating that even modest sparsity does not alleviate the probing cost under non‑adaptivity.
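To make the fully non‑adaptive regime concrete, here is a deliberately naive prefix‑sum structure (an illustrative sketch, not the paper's construction): both probe sets are pure functions of the operation's index, and the query sits at the probe‑heavy end of the read/write tradeoff.

```python
class NonAdaptivePrefixSum:
    """Toy fully non-adaptive prefix sums over n cells.

    update(i, v) writes the single fixed cell i; query(i) reads the
    fixed cells 0..i. Neither probe set depends on memory contents,
    so the structure is fully non-adaptive -- at the cost of queries
    that may probe Theta(n) cells.
    """

    def __init__(self, n):
        self.cells = [0] * n

    def update(self, i, v):
        self.cells[i] = v                # write set {i}: fixed by i alone

    def query(self, i):
        return sum(self.cells[:i + 1])   # read set {0..i}: fixed by i alone

ps = NonAdaptivePrefixSum(8)
ps.update(0, 1)
ps.update(2, 5)
print(ps.query(3))  # 6
```

The lower bounds above concern how far any non‑adaptive scheme can improve on such naive probe counts.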
Beyond data‑structure implications, the authors uncover a deep connection to depth‑2 circuits. A non‑adaptive data structure’s computation can be viewed as a two‑layer circuit: the first layer reads a fixed set of cells (inputs), and the second layer combines these values to produce the answer (outputs). Any cell‑probe lower bound therefore translates directly into a size lower bound for depth‑2 linear circuits computing the same function. Using this correspondence, the paper identifies candidate hard functions for circuit lower bounds, notably certain sparse linear operators that already exhibit polynomial probe lower bounds. This bridges the gap between data‑structure complexity and circuit complexity, suggesting that progress on one side can inform the other.
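Under this correspondence (sketched here for a linear map with hypothetical wiring, not an example from the paper), cells become middle‑layer gates, update probes become input wires, query probes become output wires, and the total wire count tracks the total probe count:

```python
def depth2_linear(x, in_wires, out_wires):
    """Evaluate a linear map y = A x as a depth-2 circuit of sums.

    in_wires[j]  -- input indices wired into middle gate (cell) j
    out_wires[k] -- middle gates wired into output k
    The wiring is fixed in advance, mirroring fixed probe sets.
    """
    cells = [sum(x[i] for i in gate) for gate in in_wires]      # update layer
    return [sum(cells[j] for j in gate) for gate in out_wires]  # query layer

# Toy operator: y0 = x0 + x1 and y1 = x1 + x2, sharing the middle gate for x1.
in_wires = [[0], [1], [2]]
out_wires = [[0, 1], [1, 2]]
print(depth2_linear([1, 2, 3], in_wires, out_wires))  # [3, 5]
```

A probe lower bound for the data structure then rules out any such circuit with too few wires for the corresponding operator.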
The final section discusses how to relax non‑adaptivity while preserving polynomial lower bounds. The authors argue that maintaining “read‑address independence” together with a “bounded‑dependence write policy” is sufficient to prevent clever encoding tricks that could otherwise reduce probe counts. They also point out practical relevance: many modern architectures enforce fixed read patterns for cache predictability, and the results imply that any attempt to support fully dynamic updates under such constraints will inevitably incur high memory‑access costs.
In summary, the paper achieves three major milestones: (1) it establishes the first polynomial cell‑probe lower bounds for non‑adaptive dynamic data structures; (2) it provides a systematic taxonomy of non‑adaptivity and a robust information‑theoretic toolkit for proving such bounds; and (3) it creates a novel conduit between data‑structure lower bounds and depth‑2 circuit lower bounds, opening new avenues for cross‑disciplinary research in computational complexity.