FlashVID: Efficient Video Large Language Models via Training-free Tree-based Spatiotemporal Token Merging

Notice: This research summary and analysis were automatically generated using AI. For authoritative details, please refer to the original arXiv source.

Although Video Large Language Models (VLLMs) have shown remarkable capabilities in video understanding, they must process large numbers of visual tokens, causing significant computational inefficiency. Existing VLLM acceleration frameworks usually compress spatial and temporal redundancy independently, which overlooks spatiotemporal relationships and leads to suboptimal compression. Highly correlated visual features are likely to change in spatial position, scale, orientation, and other attributes over time due to the dynamic nature of video. Building on this insight, we introduce FlashVID, a training-free inference acceleration framework for VLLMs. Specifically, FlashVID uses Attention and Diversity-based Token Selection (ADTS) to select the most representative tokens as a basic video representation, then applies Tree-based Spatiotemporal Token Merging (TSTM) for fine-grained spatiotemporal redundancy elimination. Extensive experiments on three representative VLLMs across five video understanding benchmarks demonstrate the effectiveness and generalization of our method. Notably, while retaining only 10% of visual tokens, FlashVID preserves 99.1% of the performance of LLaVA-OneVision. FlashVID can therefore serve as a training-free, plug-and-play module for extending long-video inputs, enabling a 10x increase in the number of video frames fed to Qwen2.5-VL and yielding a relative improvement of 8.6% within the same computational budget. Code is available at https://github.com/Fanziyang-v/FlashVID.


💡 Research Summary

Video Large Language Models (VLLMs) have demonstrated impressive capabilities in video understanding, yet their inference cost is dominated by the massive number of visual tokens generated from each frame. Because the self‑attention operation scales quadratically with sequence length, processing thousands of visual tokens per video quickly becomes prohibitive in both FLOPs and memory. Existing acceleration methods typically compress spatial redundancy (within a frame) and temporal redundancy (across frames) independently. This decoupled approach ignores the fact that semantically similar visual elements often move, change scale, or rotate over time, so the most correlated tokens in adjacent frames are rarely aligned at fixed spatial coordinates. Consequently, prior techniques either merge unrelated tokens (as in Temporal Token Merging, TTM) or discard important dynamic information, leading to sub‑optimal speed‑accuracy trade‑offs.
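The quadratic cost mentioned above can be illustrated with a rough count of the multiply-adds needed for the attention score matrix alone. The frame and token counts below are hypothetical, chosen only to show the scaling:

```python
def attn_score_flops(num_frames, tokens_per_frame, hidden_dim=4096):
    """Approximate FLOPs for computing the QK^T score matrix once:
    2 * L^2 * d multiply-adds, where L is the visual sequence length."""
    seq_len = num_frames * tokens_per_frame
    return 2 * seq_len ** 2 * hidden_dim

full = attn_score_flops(32, 196)    # all visual tokens per frame
pruned = attn_score_flops(32, 20)   # ~10% of tokens retained
print(full / pruned)                # cost ratio grows quadratically
```

Because the cost is quadratic in sequence length, keeping roughly 10% of tokens shrinks this term by about two orders of magnitude, which is why token reduction dominates other optimizations for long videos.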

FlashVID addresses this gap with a completely training‑free, plug‑and‑play pipeline consisting of two synergistic modules: Attention and Diversity‑based Token Selection (ADTS) and Tree‑based Spatiotemporal Token Merging (TSTM).

ADTS operates frame‑wise. For each frame, it first computes a cosine distance matrix among all visual tokens. To avoid selecting tokens solely based on diversity, two calibration signals are introduced: (1) the attention weight of the
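The per-frame computation described so far (a pairwise cosine distance matrix, calibrated by attention) can be sketched as follows. The excerpt does not give ADTS's exact scoring rule, so the greedy attention-weighted farthest-point heuristic below is an assumption for illustration; `select_tokens`, its signature, and the scoring formula are hypothetical:

```python
import numpy as np

def select_tokens(tokens, attn, k):
    """Greedily pick k representative tokens from one frame.

    tokens: (N, D) visual token features for the frame
    attn:   (N,) attention weights used as a calibration signal
    NOTE: a generic attention-weighted diversity heuristic, not
    the paper's exact ADTS formulation.
    """
    feats = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    dist = 1.0 - feats @ feats.T              # pairwise cosine distance
    selected = [int(np.argmax(attn))]         # seed: most-attended token
    while len(selected) < k:
        min_dist = dist[:, selected].min(axis=1)  # diversity term
        score = attn * min_dist                   # attention calibration
        score[selected] = -np.inf                 # never re-pick a token
        selected.append(int(np.argmax(score)))
    return sorted(selected)
```

Weighting the diversity term by attention keeps the selection from drifting toward visually distinct but irrelevant regions, which matches the motivation stated above for adding calibration signals.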

