A Categorical Analysis of Large Language Models and Why LLMs Circumvent the Symbol Grounding Problem


📝 Original Info

  • Title: A Categorical Analysis of Large Language Models and Why LLMs Circumvent the Symbol Grounding Problem
  • ArXiv ID: 2512.09117
  • Date: 2025-12-09
  • Authors: Luciano Floridi, Yiyang Jia, Fernando Tohmé

📝 Abstract

This paper presents a formal, categorical framework for analysing how humans and large language models (LLMs) transform content into truth-evaluated propositions about a state space of possible worlds W, in order to argue that LLMs do not solve but circumvent the symbol grounding problem. Operating at an epistemological level of abstraction within the category of relations (Rel), we model the human route (H → C → Pred(W)), the consultation and interpretation of grounded content, and the artificial route. The framework distinguishes syntax from semantics, represents meanings as propositions within Pred(W) (the power set of W), and defines success as soundness (entailment): the success set H_✓ ⊆ H on which the AI's output set P_AI(h) is a subset of the human ground-truth set P_human(h). We then locate failure modes at tokenisation, dataset construction, training generalisation, prompting ambiguity, inference stochasticity, and interpretation. On this basis, we advance the central thesis that LLMs lack unmediated access to W and therefore do not solve the symbol grounding problem. Instead, they circumvent it by exploiting pre-grounded human content. We further argue that apparent semantic competence is derivative of hu...
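The abstract's soundness criterion can be sketched concretely. Below is a minimal toy model, assuming a small finite state space W; the names `W`, `p_human`, `p_ai`, and `is_sound`, along with the example propositions, are illustrative choices, not definitions from the paper.

```python
# Toy model of the paper's soundness (entailment) criterion:
# propositions are elements of Pred(W), i.e. subsets of a state
# space W of possible worlds, and a model output for content h is
# "sound" when P_AI(h) ⊆ P_human(h).

# W: a toy finite state space of possible worlds (illustrative).
W = frozenset({"w1", "w2", "w3", "w4"})

def p_human(h: str) -> frozenset:
    """Ground-truth proposition a human associates with content h."""
    table = {"it rains": frozenset({"w1", "w2"})}
    return table.get(h, W)  # default: the trivial proposition W

def p_ai(h: str) -> frozenset:
    """Proposition expressed by the model's output for content h."""
    table = {"it rains": frozenset({"w1"}),
             "it snows": frozenset({"w3", "w4"})}
    return table.get(h, W)

def is_sound(h: str) -> bool:
    """Success as soundness: P_AI(h) is a subset of P_human(h)."""
    return p_ai(h) <= p_human(h)

print(is_sound("it rains"))  # True: {w1} ⊆ {w1, w2}
print(is_sound("it snows"))  # False: {w3, w4} ⊄ W's ground truth here? No:
                             # p_human defaults to W, so this is True.
```

The success set H_✓ from the abstract is then simply the set of contents h for which `is_sound(h)` holds.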

📄 Full Content

...(The full text is omitted due to its length. Please see the site for the complete article.)
