Towards Lossless Ultimate Vision Token Compression for VLMs

Reading time: 2 minutes

📝 Original Info

  • Title: Towards Lossless Ultimate Vision Token Compression for VLMs
  • ArXiv ID: 2512.09010
  • Date: 2025-12-09
  • Authors: Dehua Zheng, Mouxiao Huang, Borui Jiang, Hailin Hu, Xinghao Chen

📝 Abstract

Vision-language models encounter challenges in computational efficiency and latency, primarily due to the substantial redundancy in the token representations of high-resolution images and videos. Current attention/similarity-based compression algorithms suffer from either position bias or class imbalance, leading to significant accuracy degradation. They also fail to generalize to shallow LLM layers, which exhibit weaker cross-modal interactions. To address this, we extend token compression to the visual encoder through an effective iterative merging scheme that operates along orthogonal spatial axes to accelerate computation across the entire VLM. Furthermore, we integrate a spectrum pruning unit into the LLM through an attention/similarity-free low-pass filter, which gradually prunes redundant visual tokens and is fully compatible with modern FlashAttention. On this basis, we propose the Lossless Ultimate Vision Token Compression (LUVC) framework. LUVC systematically compresses visual tokens until their complete elimination at the final layer of the LLM, so that the high-dimensional visual features are gradually fused into the multimodal queries. Experiments show that LUVC achieves a 2× inference speedup in the language model with negligible accuracy degradation, and its training-free design enables immediate deployment across multiple VLMs.
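The abstract's attention/similarity-free low-pass filtering idea can be illustrated with a minimal sketch. This is not the paper's implementation; it only shows one plausible reading: treat the visual token sequence as a 1-D signal per feature channel, keep the low-frequency spectrum, and resample to a shorter sequence. The function name `lowpass_prune` and the `keep_ratio` parameter are hypothetical, for illustration only.

```python
import numpy as np

def lowpass_prune(tokens: np.ndarray, keep_ratio: float = 0.5) -> np.ndarray:
    """Hypothetical sketch of low-pass token pruning (not the paper's code).

    Treats the token sequence as a 1-D signal per channel, truncates the
    high-frequency part of its spectrum, and inverse-transforms to a
    shorter sequence. No attention or pairwise similarity is needed, so
    nothing here conflicts with FlashAttention-style kernels.
    """
    n, d = tokens.shape
    k = max(1, int(n * keep_ratio))          # target number of tokens
    # Real FFT along the token axis; low frequencies carry the smooth,
    # globally shared structure, high frequencies the fine redundancy.
    spectrum = np.fft.rfft(tokens, axis=0)
    truncated = spectrum[: k // 2 + 1]        # low-pass: drop high bins
    # Inverse transform to k tokens; rescale to preserve amplitude.
    return np.fft.irfft(truncated, n=k, axis=0) * (k / n)

tokens = np.random.randn(64, 16)              # 64 visual tokens, 16-dim
pruned = lowpass_prune(tokens, keep_ratio=0.5)
print(pruned.shape)                           # (32, 16)
```

Because the zero-frequency bin is always kept, a constant token sequence passes through unchanged, which is the sanity check one would expect of a low-pass operator.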

📄 Full Content

...(The full text is omitted here for length. Please see the original site for the complete article.)
