Rep2Text: Decoding Full Text from a Single LLM Token Representation
📝 Original Info
- Title: Rep2Text: Decoding Full Text from a Single LLM Token Representation
- ArXiv ID: 2511.06571
- Date: 2025-11-09
- Authors: ** Author information was not provided in the source metadata. **
📝 Abstract
Large language models (LLMs) have achieved remarkable progress across diverse tasks, yet their internal mechanisms remain largely opaque. In this work, we address a fundamental question: to what extent can the original input text be recovered from a single last-token representation within an LLM? We propose Rep2Text, a novel framework for decoding full text from last-token representations. Rep2Text employs a trainable adapter that projects a target model's internal representations into the embedding space of a decoding language model, which then autoregressively reconstructs the input text. Experiments on various model combinations (Llama-3.1-8B, Gemma-7B, Mistral-7B-v0.1, Llama-3.2-3B) demonstrate that, on average, over half of the information in 16-token sequences can be recovered from this compressed representation while maintaining strong semantic integrity and coherence. Furthermore, our analysis reveals an information bottleneck effect: longer sequences exhibit decreased token-level recovery while preserving strong semantic integrity. In addition, our framework demonstrates robust generalization to out-of-distribution medical data.
💡 Deep Analysis
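The abstract does not give implementation details, but the adapter it describes can be sketched in a few lines. Below is a minimal sketch assuming the adapter is a single linear projection from the target model's hidden size into the decoder's embedding size; the dimensions, architecture, and the idea of prepending the projected vector as a soft prompt are assumptions for illustration, not the paper's confirmed design:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions (hypothetical): target-model hidden size and
# decoding-model embedding size.
TARGET_HIDDEN = 4096   # e.g. a Llama-3.1-8B-sized hidden state
DECODER_EMBED = 3072   # e.g. a Llama-3.2-3B-sized embedding space

class LinearAdapter:
    """Trainable projection from a last-token representation into the
    decoder's embedding space. A sketch: the paper's adapter may use
    more layers, normalization, or multiple output vectors."""

    def __init__(self, d_in: int, d_out: int):
        # Scaled random init; in training these would be learned.
        self.W = rng.normal(scale=d_in ** -0.5, size=(d_in, d_out))
        self.b = np.zeros(d_out)

    def __call__(self, h: np.ndarray) -> np.ndarray:
        return h @ self.W + self.b

# Stand-in for the target model's last-token hidden state.
last_token_rep = rng.normal(size=(TARGET_HIDDEN,))

adapter = LinearAdapter(TARGET_HIDDEN, DECODER_EMBED)
soft_prompt = adapter(last_token_rep)

# The projected vector would then be fed to the decoding LM (e.g. as a
# prefix embedding), which autoregressively reconstructs the input text.
print(soft_prompt.shape)
```

Training would then optimize the adapter (and possibly the decoder) with a standard next-token reconstruction loss on the original text; that objective is inferred from the abstract's description, not stated explicitly here.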
Reference
This content is AI-processed based on open access ArXiv data.