SVD-ViT: Does SVD Make Vision Transformers Attend More to the Foreground?
Vision Transformers (ViT) have been established as large-scale foundation models. However, because self-attention operates globally, they lack an explicit mechanism to distinguish foreground from background. As a result, ViT may learn unnecessary background features and artifacts, leading to degraded classification performance. To address this issue, we propose SVD-ViT, which leverages singular value decomposition (SVD) to prioritize the learning of foreground features. SVD-ViT consists of three components, the **SPC module**, **SSVA**, and **ID-RSVD**, and suppresses task-irrelevant factors such as background noise and artifacts by extracting and aggregating singular vectors that capture object foreground information. Experimental results demonstrate that our method improves classification accuracy and effectively learns informative foreground representations while reducing the impact of background noise.
💡 Research Summary
Vision Transformers (ViT) have become the de‑facto backbone for many vision tasks, yet their global self‑attention mechanism lacks an explicit inductive bias for separating foreground from background. Consequently, ViT often incorporates irrelevant background features and high‑norm artifacts, which can degrade classification performance, especially on fine‑grained datasets.
To address this, the authors propose SVD‑ViT, a novel architecture that injects singular value decomposition (SVD) directly into the feature extraction pipeline of ViT. The method consists of three tightly coupled components:
- SPC (Spatial Principal Component) module – after a chosen Transformer encoder layer, the token matrix $X \in \mathbb{R}^{(1+N) \times C}$ (including the class token)
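To make the SVD-based idea concrete, here is a minimal sketch (not the paper's exact SPC module; the function name, rank `k`, and dimensions are illustrative assumptions) of how a token matrix can be reconstructed from its top-k singular components, suppressing low-energy directions that often correspond to background noise or artifacts:

```python
import numpy as np

def topk_svd_reconstruct(X: np.ndarray, k: int) -> np.ndarray:
    """Hypothetical helper: keep only the k largest singular components of X.

    X has shape (1 + N, C) — a class token plus N patch tokens, as in ViT.
    The rank-k reconstruction retains the dominant (often foreground-related)
    structure while discarding low-energy directions.
    """
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]

# Example: 14x14 = 196 patch tokens plus one class token, channel dim C = 64.
rng = np.random.default_rng(0)
X = rng.standard_normal((1 + 196, 64))
X_fg = topk_svd_reconstruct(X, k=8)
print(X_fg.shape)  # (197, 64)
```

The rank `k` here is a hyperparameter of this sketch: smaller values discard more of the low-singular-value directions, at the risk of losing genuine object detail.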