Near-Oracle KV Selection via Pre-hoc Sparsity for Long-Context Inference

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original arXiv source.

A core bottleneck in large language model (LLM) inference is the cost of attending over the ever-growing key-value (KV) cache. Although near-oracle top-k KV selection can preserve the quality of dense attention while sharply reducing computation and bandwidth, existing sparse methods generally rely on posterior heuristics, i.e., selectors conditioned on observed attention or proxy scores. Such conditioning introduces posterior bias: it tends to distort true token importance and miss salient tokens, thereby impairing long-range reasoning. To tackle this problem, we propose Pre-hoc Sparsity (PrHS), which selects KV entries before attention scoring and provides explicit accuracy control. Let the attention mass of discarded entries be δ (the dropped mass). Through a marginal-to-mutual-information analysis, we derive an upper bound on the mutual-information loss that depends only on the dropped mass. This relation explains failure modes of posterior heuristics and enables verifiable guarantees by controlling the dropped mass in advance. Within PrHS, we instantiate three orthogonal pre-hoc selectors along the axes of time, depth, and layer. Extensive experiments on LLaMA and Mistral families validate PrHS. Across GSM8K and CoQA, PrHS reduces retrieval overhead by over 90%, achieving 3x higher retrieval sparsity than HShare at matched or better accuracy. It incurs under 1% average degradation on LongBench, lowers attention FLOPs by about 15% versus prior sparse baselines, and yields a 9.9x speedup in attention-operator latency and 2.8x higher throughput on NVIDIA A100-80GB GPUs than the dense baseline.


💡 Research Summary

The paper tackles the dominant bottleneck in long‑context inference for large language models (LLMs): the linear growth of the key‑value (KV) cache and the resulting O(H L d) per‑step attention cost. While the “top‑k oracle” (selecting the k highest‑scoring KV entries for each query) offers the best accuracy‑efficiency trade‑off, it is impractical because computing all attention scores still requires a full O(H L d) pass. Existing sparse methods therefore rely on posterior heuristics—rules that condition on already observed attention statistics, token ages, or low‑dimensional sketches. The authors term this class Posterior‑conditioned Sparsity (PoHS) and demonstrate that it introduces a systematic “posterior bias”: the surrogate scores deviate from true attention, causing the selected set to retain less probability mass and to drop salient tokens, especially in long‑range reasoning.
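The top-k oracle described above can be sketched numerically. The following is a minimal single-query NumPy illustration (dimensions, seeds, and function names are hypothetical, not from the paper); note that the oracle must still score all L cached entries before it can pick the top k, which is exactly why it is impractical at inference time:

```python
import numpy as np

def dense_attention(q, K, V):
    # q: (d,), K, V: (L, d). Full softmax attention over all L cached entries.
    scores = K @ q / np.sqrt(q.shape[0])
    p = np.exp(scores - scores.max())
    p /= p.sum()
    return p @ V

def topk_oracle_attention(q, K, V, k):
    # "Oracle" selection: the scores for ALL L entries are computed first
    # (a full O(L d) pass), then only the k highest-scoring entries attend.
    scores = K @ q / np.sqrt(q.shape[0])
    idx = np.argpartition(scores, -k)[-k:]   # indices of the k largest scores
    p = np.exp(scores[idx] - scores[idx].max())
    p /= p.sum()                             # renormalize over the selected set
    return p @ V[idx]

# Toy sizes for illustration only.
rng = np.random.default_rng(0)
L, d, k = 1024, 64, 32
q = rng.normal(size=d)
K = rng.normal(size=(L, d))
V = rng.normal(size=(L, d))

y_dense = dense_attention(q, K, V)
y_topk = topk_oracle_attention(q, K, V, k)
```

A pre-hoc selector, by contrast, would choose `idx` before any scores are computed, which is the setting PrHS targets.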

To overcome this, the authors develop a theoretical framework that links the dropped attention mass δ (the total attention probability assigned to KV entries that are discarded) to the mutual-information (MI) loss between the full-attention distribution and the sparse distribution. They prove an upper bound I_full − I_S ≤ g(δ), where the function g depends only on the dropped mass, so controlling δ in advance yields a verifiable bound on the MI loss without ever observing the attention scores.
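The dropped mass δ itself is easy to measure after the fact. The sketch below (a NumPy illustration with hypothetical names and shapes, not the paper's code) computes δ for two keep-sets of the same size: the oracle top-k set and a simple recency heuristic. The oracle set retains the maximum possible attention mass, so its δ is never larger:

```python
import numpy as np

def dropped_mass(q, K, keep_idx):
    # δ = total full-attention probability assigned to KV entries NOT kept.
    scores = K @ q / np.sqrt(q.shape[0])
    p = np.exp(scores - scores.max())
    p /= p.sum()
    mask = np.ones(K.shape[0], dtype=bool)
    mask[keep_idx] = False
    return p[mask].sum()

rng = np.random.default_rng(1)
L, d, k = 512, 64, 64
q = rng.normal(size=d)
K = rng.normal(size=(L, d))

scores = K @ q / np.sqrt(d)
oracle_keep = np.argpartition(scores, -k)[-k:]  # best possible size-k keep-set
recent_keep = np.arange(L - k, L)               # recency heuristic: newest k tokens

delta_oracle = dropped_mass(q, K, oracle_keep)
delta_recent = dropped_mass(q, K, recent_keep)
# The oracle keep-set drops no more mass than any other set of the same size.
assert delta_oracle <= delta_recent + 1e-12
```

Since the MI loss is bounded by a function of δ alone, any selector that certifiably keeps δ small inherits a quality guarantee, which is the lever PrHS pulls before attention scoring.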

