MixQuant: Pushing the Limits of Block Rotations in Post-Training Quantization
Recent post-training quantization (PTQ) methods have adopted block rotations to diffuse outliers prior to rounding. While this reduces the overhead of full-vector rotations, the effect of block structure on outlier suppression remains poorly understood. To fill this gap, we present the first systematic, non-asymptotic analysis of outlier suppression for block Hadamard rotations. Our analysis reveals that outlier suppression is fundamentally limited by the geometry of the input vector. In particular, post-rotation outliers are deterministically minimized when the pre-rotation $\ell_1$ norm mass is evenly distributed across blocks. Guided by these insights, we introduce MixQuant, a block rotation-aware PTQ framework that redistributes activation mass via permutations prior to rotation. We propose a greedy mass diffusion algorithm to calibrate permutations by equalizing the expected blockwise $\ell_1$ norms. To avoid adding inference overhead, we identify permutation-equivariant regions in transformer architectures to merge the resulting permutations into model weights before deployment. Experiments show that MixQuant consistently improves accuracy across all block sizes, closing up to 90% of the perplexity gap to full-vector rotation when quantizing Llama3 1B to INT4 with block size 16, compared to 46% without permutations.
💡 Research Summary
The paper addresses a key bottleneck in post‑training quantization (PTQ) of large language models (LLMs): activation outliers that severely degrade accuracy when models are quantized to very low bit‑widths (e.g., INT4). Recent PTQ methods mitigate this problem by inserting orthogonal rotations—most commonly Hadamard transforms—before rounding, thereby diffusing large values across vector coordinates. While full‑vector Hadamard rotations are effective, they often have to be executed online, incurring a non‑trivial latency overhead (7–15% for Llama‑2 7B, as reported by prior work). To reduce this cost, practitioners have turned to block‑wise rotations, applying independent Hadamard transforms to fixed‑size partitions of the activation vector. This reduces the computational complexity from O(d log d) to O(d log b) for block size b, but at the price of weaker outlier suppression, especially for small b.
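The mechanics of block-wise rotation can be sketched in a few lines. The snippet below (a minimal illustration, not the paper's implementation) applies an independent normalized Hadamard transform to each size-b block of an activation vector, showing how a single outlier is diffused only within its own block:

```python
import numpy as np

def hadamard(b: int) -> np.ndarray:
    """Normalized Hadamard matrix of size b (b must be a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < b:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(b)

def block_rotate(x: np.ndarray, b: int) -> np.ndarray:
    """Apply an independent normalized Hadamard rotation to each block of size b."""
    d = x.shape[-1]
    assert d % b == 0, "dimension must be divisible by block size"
    H = hadamard(b)
    # Reshape into (d/b) blocks, rotate each, then flatten back.
    return (x.reshape(-1, b) @ H.T).reshape(d)

# With d=8 and b=4, the outlier (value 8) in the first block is spread
# across that block only; the second block is rotated independently.
x = np.array([8.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0])
y = block_rotate(x, 4)
# y = [4, 4, 4, 4, 2, 0, 0, 0]; the peak magnitude drops from 8 to 4.
```

Because each Hadamard block is orthogonal, the overall Euclidean norm is preserved, but—as the paper's analysis emphasizes—suppression is confined to within-block geometry: if all the mass sits in one block, the other blocks cannot help absorb it.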
Theoretical Contributions
The authors provide the first systematic, non‑asymptotic analysis of outlier suppression for both full‑vector and block Hadamard rotations. They introduce a geometric quantity characterizing how the pre‑rotation $\ell_1$ norm mass of the input vector is distributed across blocks, and show that post‑rotation outliers are deterministically minimized when this mass is evenly balanced.
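The paper's greedy mass diffusion algorithm calibrates a permutation that equalizes expected blockwise ℓ1 norms. The exact procedure is not reproduced here; the sketch below shows one natural greedy realization of that objective (an assumption on my part, in the style of longest-processing-time scheduling): sort coordinates by expected activation mass and repeatedly assign the heaviest remaining coordinate to the currently lightest block with a free slot.

```python
import numpy as np

def greedy_mass_diffusion(mass: np.ndarray, b: int) -> np.ndarray:
    """Greedy permutation sketch (hypothetical realization of the paper's
    blockwise l1-equalization objective): place coordinates, heaviest first,
    into the currently lightest block that still has a free slot.

    Returns a permutation `perm` whose slice perm[k*b:(k+1)*b] lists the
    coordinates assigned to block k.
    """
    d = mass.shape[0]
    assert d % b == 0, "dimension must be divisible by block size"
    n_blocks = d // b
    block_mass = np.zeros(n_blocks)
    block_members = [[] for _ in range(n_blocks)]
    for idx in np.argsort(-mass):  # heaviest coordinates first
        open_blocks = [k for k in range(n_blocks) if len(block_members[k]) < b]
        k = min(open_blocks, key=lambda j: block_mass[j])  # lightest open block
        block_members[k].append(idx)
        block_mass[k] += mass[idx]
    return np.concatenate([np.array(m) for m in block_members])

# Calibration masses for a toy 8-channel activation with two heavy channels.
mass = np.array([10.0, 9.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0])
perm = greedy_mass_diffusion(mass, b=4)
# The two heavy channels (0 and 1) land in different blocks, so each block
# hands its Hadamard rotation a comparable amount of l1 mass to diffuse.
```

Consistent with the paper's deployment strategy, such a permutation adds no inference cost when it can be folded into the adjacent weight matrices within permutation-equivariant regions of the transformer.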