Are language models aware of the road not taken? Token-level uncertainty and hidden state dynamics
📝 Original Info
- Title: Are language models aware of the road not taken? Token-level uncertainty and hidden state dynamics
- ArXiv ID: 2511.04527
- Date: 2025-11-06
- Authors: Not included in the provided source data.
📝 Abstract
When a language model generates text, the selection of individual tokens might lead it down very different reasoning paths, making uncertainty difficult to quantify. In this work, we consider whether reasoning language models represent the alternate paths that they could take during generation. To test this hypothesis, we use hidden activations to control and predict a language model's uncertainty during chain-of-thought reasoning. In our experiments, we find a clear correlation between how uncertain a model is at different tokens, and how easily the model can be steered by controlling its activations. This suggests that activation interventions are most effective when there are alternate paths available to the model -- in other words, when it has not yet committed to a particular final answer. We also find that hidden activations can predict a model's future outcome distribution, demonstrating that models implicitly represent the space of possible paths.
💡 Deep Analysis
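The abstract's core intuition can be illustrated with a toy sketch. This is not the paper's actual setup: the 4-token "unembedding" matrix, the steering direction, and both hidden states are invented here purely to show why the same activation intervention moves an uncommitted (high-entropy) state much more than a committed (near-one-hot) one.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over logits
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Toy "unembedding" matrix: 4 vocabulary tokens, hidden size 8.
# Rows are orthonormal, so logits are just coordinates of h.
W = np.eye(4, 8)

# A "committed" hidden state: strongly aligned with token 0's direction.
h_committed = 5.0 * W[0]
# An "uncertain" hidden state: sitting between token 0 and token 1.
h_uncertain = 0.2 * (W[0] + W[1])

# Hypothetical steering vector nudging the model toward token 1.
v = W[1] - W[0]

def shift(h, alpha=1.0):
    """Total change in the output distribution caused by steering."""
    p_before = softmax(W @ h)
    p_after = softmax(W @ (h + alpha * v))
    return np.abs(p_after - p_before).sum()

# The same intervention barely moves the committed state's distribution,
# but substantially redistributes mass for the uncertain one.
print(shift(h_committed) < shift(h_uncertain))  # → True
```

Because softmax saturates, a committed state's near-one-hot distribution absorbs the nudge, while a flat distribution swings sharply. This mirrors the paper's finding that steerability peaks at tokens where the model still represents alternate paths.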
📄 Full Content
Reference
This content is AI-processed based on open access ArXiv data.