MixKVQ: Query-Aware Mixed-Precision KV Cache Quantization for Long-Context Reasoning

Reading time: 2 minutes
...

📝 Original Info

  • Title: MixKVQ: Query-Aware Mixed-Precision KV Cache Quantization for Long-Context Reasoning
  • ArXiv ID: 2512.19206
  • Date: 2025-12-22
  • Authors: Not specified (author information was not provided in the paper)

📝 Abstract

Long Chain-of-Thought (CoT) reasoning has significantly advanced the capabilities of Large Language Models (LLMs), but this progress is accompanied by substantial memory and latency overhead from the extensive Key-Value (KV) cache. Although KV cache quantization is a promising compression technique, existing low-bit quantization methods often exhibit severe performance degradation on complex reasoning tasks. Fixed-precision quantization struggles to handle outlier channels in the key cache, while current mixed-precision strategies fail to accurately identify components requiring high-precision representation. We find that an effective low-bit KV cache quantization strategy must consider two factors: a key channel's intrinsic quantization difficulty and its relevance to the query. Based on this insight, we propose MixKVQ, a novel plug-and-play method that introduces a lightweight, query-aware algorithm to identify and preserve critical key channels that need higher precision, while applying per-token quantization to the value cache. Experiments on complex reasoning datasets demonstrate that our approach significantly outperforms existing low-bit methods, achieving performance comparable to a full-precision baseline at a substantially reduced memory footprint.
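To make the abstract's mechanism concrete, below is a minimal, hypothetical PyTorch sketch of query-aware mixed-precision KV cache quantization. It is not the authors' implementation: the channel score (per-channel dynamic range as a proxy for quantization difficulty, multiplied by the query's per-channel magnitude as a proxy for query relevance), the `fake_quant` helper, the 2-bit setting, and the `keep_ratio` value are all illustrative assumptions. The sketch keeps the top-scoring key channels in full precision, applies low-bit per-channel quantization to the remaining key channels, and applies per-token quantization to the value cache.

```python
import torch


def fake_quant(x: torch.Tensor, n_bits: int, dim: int) -> torch.Tensor:
    """Uniform min-max quantize-dequantize along `dim` (illustrative, not the paper's exact scheme)."""
    qmax = 2 ** n_bits - 1
    x_min = x.amin(dim=dim, keepdim=True)
    x_max = x.amax(dim=dim, keepdim=True)
    scale = (x_max - x_min).clamp(min=1e-8) / qmax
    q = torch.round((x - x_min) / scale).clamp(0, qmax)
    return q * scale + x_min


def quantize_kv(keys, values, query, n_bits=2, keep_ratio=0.1):
    """keys, values: (seq_len, head_dim) cache for one head; query: (head_dim,) current query vector."""
    # Score each key channel by (assumed) quantization difficulty x query relevance:
    # difficulty proxy = per-channel dynamic range, relevance proxy = |q_d|.
    difficulty = keys.amax(dim=0) - keys.amin(dim=0)
    relevance = query.abs()
    score = difficulty * relevance

    # Keep the top-scoring key channels in full precision.
    n_keep = max(1, int(keep_ratio * keys.shape[1]))
    keep = torch.zeros(keys.shape[1], dtype=torch.bool)
    keep[torch.topk(score, n_keep).indices] = True

    q_keys = keys.clone()
    # Low-bit per-channel quantization for the remaining key channels.
    q_keys[:, ~keep] = fake_quant(keys[:, ~keep], n_bits, dim=0)
    # Per-token (per-row) quantization for the value cache.
    q_values = fake_quant(values, n_bits, dim=-1)
    return q_keys, q_values


if __name__ == "__main__":
    torch.manual_seed(0)
    k, v, q = torch.randn(128, 64), torch.randn(128, 64), torch.randn(64)
    qk, qv = quantize_kv(k, v, q)
    print("mean |key error|:", (qk - k).abs().mean().item())
    print("mean |value error|:", (qv - v).abs().mean().item())
```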

💡 Deep Analysis

[Figure 1 from the paper]

📄 Full Content

📸 Image Gallery

  • Key_Error_Highlights_Heatmap.png
  • Layer_0_Head_2_KV_Error_Heatmap.png
  • efficiency_tradeoff.png
  • method.png
  • pareto.png
  • quant_analysis_layer_0_combined_v3.png
  • reason_drop.png

Reference

This content was AI-processed from open-access ArXiv data.
