Mitigating Coordinate Prediction Bias from Positional Encoding Failures
Reading time: 2 minutes
📝 Original Info
- Title: Mitigating Coordinate Prediction Bias from Positional Encoding Failures
- ArXiv ID: 2510.22102
- Date: 2025-10-25
- Authors: 홍길동 (Hongik University, Dept. of Computer Engineering), 김민수 (Seoul National University AI Institute), 이서연 (Kakao AI Lab), 박지훈 (Naver Cloud) ※ The paper does not list author information; the names above are fictional placeholders.
📝 Abstract
Multimodal large language models (MLLMs) excel at vision-language tasks such as VQA and document understanding, yet precise coordinate prediction remains challenging. High-resolution inputs exacerbate this difficulty by producing long token sequences that weaken positional encodings and introduce directional biases in coordinate outputs. We investigate this phenomenon by analyzing how MLLMs behave when visual positional encodings (VPEs) are deliberately perturbed through shuffling. Our analysis reveals that such perturbations induce predictable, non-random coordinate biases rather than random errors, suggesting that models rely on internal positional priors when spatial grounding signals are degraded. Crucially, we observe similar directional error patterns in natural high-resolution datasets, indicating that positional encoding failures are a key bottleneck for accurate coordinate prediction at scale. To address this issue, we propose Vision-PE Shuffle Guidance (VPSG), a training-free test-time method that leverages the directional nature of these biases for correction. VPSG runs auxiliary decoding with shuffled VPEs to isolate position-unconditioned tendencies, then uses this as negative evidence to guide digit prediction while preserving coordinate format through a lightweight finite-state machine. Experiments on ScreenSpot-Pro demonstrate reliable improvements, highlighting positional encoding robustness as a critical factor for spatial reasoning in MLLMs.

💡 Deep Analysis
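The abstract describes two mechanisms: an auxiliary decoding pass with shuffled VPEs whose logits are subtracted as negative evidence, and a finite-state machine that keeps digit prediction inside a valid coordinate format. The paper's exact guidance formula is not given here, so below is a minimal toy sketch of one plausible reading, using a classifier-free-guidance-style combination `(1 + α)·normal − α·shuffled` and a hypothetical `(x, y)` vocabulary; all function names, the FSM states, and the guidance form are assumptions for illustration.

```python
import numpy as np

def vpsg_guided_logits(logits_normal, logits_shuffled, alpha=1.0):
    """Subtract the shuffled-VPE pass (position-unconditioned prior)
    from the normal pass as negative evidence (CFG-style combination)."""
    return (1 + alpha) * logits_normal - alpha * logits_shuffled

# Toy FSM: which tokens keep a "(x, y)" coordinate format valid in each state.
ALLOWED = {
    "open": {"("},
    "x": set("0123456789") | {","},   # x digits, then comma
    "y": set("0123456789") | {")"},   # y digits, then close paren
}

def fsm_mask(state, vocab):
    """Boolean mask over the vocabulary for tokens allowed in this state."""
    return np.array([tok in ALLOWED[state] for tok in vocab])

def decode_step(state, logits_normal, logits_shuffled, vocab, alpha=1.0):
    """One greedy decoding step: guide logits, mask invalid tokens, argmax."""
    guided = vpsg_guided_logits(logits_normal, logits_shuffled, alpha)
    guided = np.where(fsm_mask(state, vocab), guided, -np.inf)
    return vocab[int(np.argmax(guided))]
```

In this sketch, a digit strongly favored even under shuffled VPEs (i.e. regardless of spatial evidence) is penalized, so a digit supported only by the position-conditioned pass can win; the mask guarantees the output still parses as a coordinate.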
Reference
This content is AI-processed based on open access ArXiv data.