Beyond Early-Token Bias: Model-Specific and Language-Specific Position Effects in Multilingual LLMs
Large Language Models (LLMs) exhibit position bias, systematically underweighting information based on its location in the context, but how this bias varies across languages and models remains unclear. We conduct a multilingual study across five typologically diverse languages (English, Russian, German, Hindi, Vietnamese) and five model architectures, analyzing how position bias interacts with prompting strategies and affects output entropy. Our key findings are: (1) Position bias is primarily model-driven but shows language-specific nuances. Notably, Qwen2.5-7B-Instruct, DeepSeek 7B Chat, and Mistral 7B consistently favor late positions, challenging the common assumption of universal early-token preference. (2) Explicitly instructing the model, in the presence of irrelevant distractors, that "the most relevant context to the query is marked as 1" unexpectedly reduces accuracy across all languages, questioning standard prompt-engineering practices. (3) Accuracy consistently drops most when relevant information appears in the middle of the context, yet this is not reflected in a corresponding increase in output entropy, suggesting the model remains confident even when it fails to use mid-context cues.
💡 Research Summary
This paper investigates position bias—the systematic under‑weighting of information based on its location in the context—in multilingual large language models (LLMs). The authors evaluate five typologically diverse languages (English, Russian, German, Hindi, Vietnamese) across five open‑source LLM architectures (Qwen2.5‑7B‑Instruct, Llama3‑8B‑Instruct, DeepSeek‑7B‑Chat, Gemma‑7B‑it, Mistral‑7B‑Instruct). For each language they sample 2,000 question‑answer pairs and construct prompts containing five context passages, exactly one of which is relevant. The relevant passage is placed at the top, middle, or bottom of the list, and three scoring strategies are applied: Aligned (relevant passage labeled “1”), All‑Zero (all passages labeled “0”), and No‑Scores (no relevance scores). This yields nine experimental conditions per language, resulting in 450,000 model generations.
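The 3 positions × 3 scoring strategies design above can be sketched as a small prompt-construction routine. This is a minimal illustration assuming the paper's described layout (five passages, one relevant, scores in square brackets); the passage labels, slot indices, and template wording are assumptions, not the authors' exact prompts.

```python
# Sketch of the experimental conditions: place the single relevant passage
# at the top, middle, or bottom of five passages, under one of three
# scoring strategies. Template details are illustrative assumptions.

POSITIONS = ["top", "middle", "bottom"]
STRATEGIES = ["aligned", "all_zero", "no_scores"]  # 3 x 3 = 9 conditions


def build_prompt(question, relevant, distractors, position, strategy):
    """Insert the relevant passage among four distractors and optionally
    attach relevance scores according to the scoring strategy."""
    assert len(distractors) == 4 and position in POSITIONS
    slot = {"top": 0, "middle": 2, "bottom": 4}[position]
    passages = distractors[:slot] + [relevant] + distractors[slot:]

    lines = []
    for i, text in enumerate(passages):
        if strategy == "aligned":
            # Only the relevant passage is marked "1".
            lines.append(f"[{1 if i == slot else 0}] {text}")
        elif strategy == "all_zero":
            lines.append(f"[0] {text}")
        else:  # no_scores
            lines.append(text)

    header = ("The most relevant context to the query is marked as 1.\n"
              if strategy == "aligned" else "")
    return header + "\n".join(lines) + f"\n\nQuestion: {question}"
```

With 2,000 question-answer pairs per language, five languages, five models, and the nine conditions produced by this grid, the total is 2,000 × 5 × 5 × 9 = 450,000 generations, matching the figure reported above.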
Performance is measured by accuracy and average predictive entropy (PE_avg), a token‑wise entropy normalized by sequence length. The key findings are: (1) Position bias is primarily model‑driven. Qwen2.5‑7B‑Instruct, DeepSeek‑7B‑Chat, and Mistral‑7B‑Instruct exhibit a strong late‑position bias, preferring relevant information at the bottom of the context, while Llama3‑8B‑Instruct shows the classic early‑position bias. The authors attribute these differences to variations in training data distribution, attention mechanisms, and positional encoding schemes. (2) Explicitly instructing the model that “the most relevant context is marked as 1” consistently degrades accuracy across all languages and models, even though the instruction is semantically correct. The degradation is attributed to the use of random distractors, which cause the model to over‑trust the label rather than the actual content, contrasting with prior work that used semantically relevant distractors. (3) Accuracy drops most sharply when the relevant passage is placed in the middle, yet the average predictive entropy does not increase correspondingly. This reveals a confidence‑accuracy gap: the model remains confident (low entropy) while failing to use mid‑context cues.
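The PE_avg metric described above (per-token entropy of the output distribution, normalized by sequence length) can be sketched as follows. This is an assumed reading of the metric from the summary's description; the authors' exact formulation may differ in details such as the log base or smoothing.

```python
import numpy as np


def pe_avg(token_logits):
    """Average predictive entropy: Shannon entropy of each generated
    token's output distribution, averaged over the sequence length.
    A sketch of the PE_avg metric as described in the summary."""
    token_logits = np.asarray(token_logits, dtype=float)
    # Numerically stable softmax over the vocabulary axis.
    z = token_logits - token_logits.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    # Per-token entropy in nats; small epsilon guards log(0).
    ent = -(probs * np.log(probs + 1e-12)).sum(axis=-1)
    # Normalize by sequence length via the mean.
    return float(ent.mean())
```

Under this reading, the confidence-accuracy gap in finding (3) means PE_avg stays low (peaked, confident token distributions) for middle-position prompts even though the answers are more often wrong.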
Language‑specific nuances are observed: overall accuracy follows English > German > Russian > Vietnamese > Hindi, and the middle‑position penalty is more pronounced for morphologically rich languages (Hindi, Vietnamese), suggesting tokenization and lexical diversity affect bias magnitude.
The paper discusses practical implications. Retrieval‑augmented generation pipelines should not assume a universal “recent or initial token priority” when reordering documents; model‑specific and language‑specific reordering strategies are needed. Chain‑of‑Thought prompting that includes explicit positional guidance may harm performance and should be applied cautiously. Finally, because entropy alone fails to capture the uncertainty associated with position bias, additional diagnostics such as attention‑weight analysis or position‑sensitivity metrics are recommended for future bias‑mitigation research. The authors release their code on GitHub for reproducibility.