Retrieval Quality at Context Limit

Reading time: 1 minute

📝 Original Info

  • Title: Retrieval Quality at Context Limit
  • ArXiv ID: 2511.05850
  • Date: 2025-11-08
  • Authors: not listed (only the title and abstract were provided)

📝 Abstract

The ability of large language models (LLMs) to recall and retrieve information from long contexts is critical for many real-world applications. Prior work (Liu et al., 2023) reported that LLMs suffer significant drops in retrieval accuracy for facts placed in the middle of long contexts, an effect known as "Lost in the Middle" (LITM). We find that the model Gemini 2.5 Flash can answer needle-in-a-haystack questions with high accuracy regardless of document position, including when the document sits near the input context limit. Our results suggest that the "Lost in the Middle" effect is not present for simple factoid Q&A in Gemini 2.5 Flash, indicating substantial improvements in long-context retrieval.
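The needle-in-a-haystack setup described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's harness: the filler text, depth grid, and the `build_haystack`/`probe_positions` helpers are all assumptions for exposition, and a real run would send each generated prompt to the model under test (e.g. Gemini 2.5 Flash via its API) and score the answer.

```python
def build_haystack(needle: str, depth: float, n_filler: int = 200) -> str:
    """Embed `needle` at a relative depth (0.0 = start, 1.0 = end)
    inside a haystack of repeated filler sentences."""
    filler = "The sky was clear and the grass was green. "
    sentences = [filler] * n_filler
    pos = int(depth * n_filler)           # insertion index for the needle
    sentences.insert(pos, needle + " ")
    return "".join(sentences)

def probe_positions(needle: str, question: str,
                    depths=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Yield (depth, prompt) pairs covering start, middle, and end
    placements; LITM predicts lower accuracy at middle depths."""
    for d in depths:
        context = build_haystack(needle, d)
        yield d, f"{context}\n\nQuestion: {question}\nAnswer:"

if __name__ == "__main__":
    needle = "The secret launch code is 7421."
    for depth, prompt in probe_positions(needle,
                                         "What is the secret launch code?"):
        # In a real evaluation, send `prompt` to the model here and
        # check whether its answer contains the needle fact.
        print(f"depth={depth:.2f} prompt_chars={len(prompt)}")
```

Scaling `n_filler` up until the prompt approaches the model's input limit reproduces the "near the context limit" condition tested in the paper; accuracy is then tabulated per depth.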


Reference

This content is AI-processed based on open access ArXiv data.
