$π$-Attention: Periodic Sparse Transformers for Efficient Long-Context Modeling

Transformers have revolutionized natural language processing, but their quadratic complexity with respect to sequence length remains a fundamental bottleneck for long-range modeling. While sparse attention mechanisms like RingAttention reduce computational cost by restricting attention to local neighborhoods, they suffer from limited receptive fields and a lack of adaptability. We present $π$-Attention, a periodic sparse Transformer that factorizes attention into ring-local neighborhoods, deterministic $π$-stride skips, and an adaptive fusion gate. The periodic structure provides predictable coverage of distant tokens, while the sparse footprint keeps per-layer complexity linear in context length. We prove that $π$-Attention achieves $O(kL + π \log L)$ receptive field growth compared to $O(kL)$ for RingAttention, where $k$ is the local window size, $π$ is the skip period, and $L$ is the sequence length. Extensive experiments on language modeling, retrieval, and vision-language tasks demonstrate that $π$-Attention matches or surpasses dense attention quality, achieving 8.3% lower perplexity than RingAttention while using 50% fewer GPUs at the same context length. Our detailed ablations and visualizations reveal the importance of periodic skips, adaptive fusion, and head-level sparsity coordination for efficient long-context modeling.
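The sparsity pattern described above can be illustrated with a compact mask construction. Below is a minimal sketch, assuming PyTorch, of how a ring-local window of size $k$ and deterministic $π$-stride skips could be combined into a boolean attention mask, with a simple sigmoid gate standing in for the adaptive fusion gate. The function names (`build_pi_attention_mask`, `sparse_attention`) and all hyperparameter values are illustrative, not the paper's implementation.

```python
# Minimal sketch (not the authors' code) of the pi-Attention sparsity pattern:
# each query attends to a ring-local window of size k plus deterministic skips
# at every multiple of the period pi.
import torch


def build_pi_attention_mask(seq_len: int, k: int, pi: int) -> torch.Tensor:
    """Return a (seq_len, seq_len) boolean mask; True = attention allowed."""
    idx = torch.arange(seq_len)
    dist = (idx[None, :] - idx[:, None]).abs()   # |i - j|
    local = dist <= k                            # ring-local neighborhood
    skips = (dist % pi == 0) & (dist > 0)        # periodic pi-stride skips
    return local | skips


def sparse_attention(q, k_, v, mask):
    """Scaled dot-product attention restricted to the sparse pattern."""
    d = q.size(-1)
    scores = q @ k_.transpose(-2, -1) / d ** 0.5
    scores = scores.masked_fill(~mask, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v


# Toy usage: fuse a local-only path with a skip-augmented path via a gate
# (a placeholder for the paper's learned adaptive fusion gate).
L, d, k_win, period = 64, 32, 4, 8
q = torch.randn(1, L, d); kk = torch.randn(1, L, d); v = torch.randn(1, L, d)
full_mask = build_pi_attention_mask(L, k_win, period)
local_mask = build_pi_attention_mask(L, k_win, L + 1)  # period > L disables skips
gate = torch.sigmoid(torch.zeros(1, L, 1))             # placeholder gate values
out = gate * sparse_attention(q, kk, v, full_mask) \
    + (1 - gate) * sparse_attention(q, kk, v, local_mask)
print(out.shape)  # torch.Size([1, 64, 32])
```

In this sketch the mask is materialized densely for clarity; the linear per-layer cost claimed in the abstract would come from evaluating only the O(k + L/π) allowed positions per query rather than the full score matrix.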

