Reasoning has long been understood as a pathway between stages of understanding: proper reasoning leads to understanding of a given subject. Traditionally, this reasoning was conceptualized as proceeding in a particular way, namely as "symbolic reasoning". Foundation Models (FMs) demonstrate that symbolic reasoning is not a necessary condition for many reasoning tasks: they can "reason" by imitating the process of "thinking out loud", testing the produced pathways, and iterating on these pathways on their own. This leads to some form of reasoning that can solve problems on its own or with few-shot learning, but it appears fundamentally different from human reasoning due to its lack of grounding and common sense, which renders the reasoning process brittle. These insights promise to substantially alter our assessment of reasoning and its necessary conditions, but they also inform approaches to safety and robust defences against this brittleness of FMs. This paper offers and discusses several philosophical interpretations of this phenomenon, argues that the previously apt metaphor of the "stochastic parrot" has lost its relevance and thus should be abandoned, and reflects on different normative elements in the safety and appropriateness considerations emerging from these reasoning models and their growing capacity.
In this paper, we critically examine both of these interpretations in light of more recent state-of-the-art reasoning models. Reasoning models exhibit the ability to provide a reasoning explanation of their output and to progress through this explanation in refining that output; that is, based on their intermediate outputs, they can sequentially "reason" through the problem they are solving. Our guiding thesis is that these models demonstrate a crucial difference between merely pretrained LLMs, which might be considered stochastic parrots, and these more advanced models, even though the latter still fall short of any established, well-understood interpretation that ascribes more substantial internal representation to their "reasoning". Thus, a new philosophically informed description of their processes is needed, which we call "simulated reasoning". Simulated reasoning should count as a subset of the complete reasoning processes humans perform. However, it is characterized by certain limits, which we aim to explore in this paper.
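To make the notion of sequential refinement concrete, the following is a minimal sketch of such a loop. The function names (generate, critique, simulated_reasoning), the cap MAX_STEPS, and the loop structure are our own illustrative assumptions, not the implementation of any particular model:

```python
# Minimal sketch of the sequential refinement loop described above.
# `generate` and `critique` are hypothetical placeholders for model calls;
# the loop structure is an illustrative assumption, not a vendor API.

MAX_STEPS = 5  # assumed cap on self-revision rounds

def generate(prompt: str) -> str:
    """Placeholder for a call to a pretrained language model."""
    raise NotImplementedError

def critique(trace: str) -> str | None:
    """Placeholder self-check: return an objection to the trace,
    or None if the model finds no further flaw."""
    raise NotImplementedError

def simulated_reasoning(problem: str) -> str:
    # First pass: produce an explicit chain of intermediate steps.
    trace = generate(f"Think step by step:\n{problem}")
    for _ in range(MAX_STEPS):
        objection = critique(trace)
        if objection is None:
            break  # no further self-correction found
        # Each revision conditions on the model's own intermediate output,
        # which is what "sequentially reasoning through the problem"
        # amounts to in this sketch.
        trace = generate(
            f"{problem}\nPrevious attempt:\n{trace}\n"
            f"Objection:\n{objection}\nRevised attempt:"
        )
    return trace
```

On this picture, the "reasoning" resides entirely in the iteration over textual intermediate outputs, which is compatible with our thesis that such a process imitates, rather than instantiates, human reasoning.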
To substantiate this thesis, we proceed as follows. First, we introduce and characterize contemporary reasoning models and their features.
Second, we discuss how the metaphor of stochastic parrots has been successful at characterizing previous models and why it might now face limitations.
Third, we introduce and discuss several available interpretations of the process in question. This includes our own proposal, namely that simulated reasoning can be a form of reasoning, and a description of some of its features and distinctions.
In this vein, some authors argue that reasoning models exhibit "abilities" that come with having a theory of mind (e.g., Moore et al. 2025). While we agree, these abilities of LLMs should also be reflected on within a normative analysis. As with previous iterations and generations of LLMs, this field ought not to be left to the label of "AI safety" alone, as safe AI is not normatively sufficient (cf. Kempt, Lavie & Nagel 2024 for an analysis of the lacking "safety normativity").
In the following, we will discuss (1) the opportunities for improved safety by way of sequential computation; (2) the boundaries set for these models, and how their reasoning capabilities may be used to strengthen or undermine those boundaries; (3) the robustness of reasoning models, and how the continued lack of common sense may affect safety precautions; and (4) execution plans, and the safety concerns emerging from reasoning models that lack grounding and yet can formulate real-world plans.
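To make point (1) concrete: because reasoning models emit intermediate steps, each step can be screened before a final answer is released. The following is a minimal sketch of this idea; is_flagged and its keyword heuristic are hypothetical stand-ins for a real safety classifier, not a description of any deployed system:

```python
# Minimal sketch of point (1): sequential computation lets a monitor
# inspect each intermediate reasoning step, not just the final output.
# `is_flagged` is a hypothetical placeholder for a safety classifier.

def is_flagged(step: str) -> bool:
    """Placeholder safety monitor over a single reasoning step."""
    return "bypass the filter" in step.lower()  # toy stand-in heuristic

def monitored_answer(steps: list[str], final_answer: str) -> str:
    for i, step in enumerate(steps):
        if is_flagged(step):
            # Intervene mid-chain instead of judging only the final output.
            return f"Refused: intermediate step {i} triggered the monitor."
    return final_answer

# Usage with toy data:
steps = ["Restate the question.", "Plan how to bypass the filter."]
print(monitored_answer(steps, "Here is the answer."))
```

The design point is that the unit of safety evaluation shifts from the final answer to the chain of intermediate outputs, which is precisely what sequential computation makes available.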
The consequences of reasoning as a sequential process of inferences
We suggest that some parts of our understanding of the phenomenon of "reasoning" should be reconceptualized, to account for the fact that some forms of reasoning can be based purely on the imitation of reasoning principles.