How the symbol grounding of living organisms can be realized in artificial agents

A system with artificial intelligence usually relies on symbol manipulation, at least partly and implicitly. However, the interpretation of the symbols - what they represent and what they are about - is ultimately left to humans, as designers and users of the system. How symbols can acquire meaning for the system itself, independent of external interpretation, is an unsolved problem. Some grounding of symbols can be obtained by embodiment, that is, by causally connecting symbols (or sub-symbolic variables) to the physical environment, such as in a robot with sensors and effectors. However, a causal connection as such does not produce representation and aboutness of the kind that symbols have for humans. Here I present a theory that explains how humans and other living organisms have acquired the capability to have symbols and sub-symbolic variables that represent, refer to, and are about something else. The theory shows how reference can be to physical objects, but also to abstract objects, and even how it can be misguided (errors in reference) or be about non-existing objects. I subsequently abstract the primary components of the theory from their biological context, and discuss how and under what conditions the theory could be implemented in artificial agents. A major component of the theory is the strong nonlinearity associated with (potentially unlimited) self-reproduction. The latter is likely not acceptable in artificial systems. It remains unclear if goals other than those inherently serving self-reproduction can have aboutness and if such goals could be stabilized.


💡 Research Summary

The paper tackles the classic symbol‑grounding problem by arguing that merely embedding an artificial system in a physical environment (embodiment) does not suffice to give its symbols the kind of “aboutness” that human symbols possess. While sensors and effectors can create causal links between internal variables and external objects, such links alone do not generate representation or referential meaning. To explain how living organisms acquire genuine referential symbols, the author proposes a theory centered on strong nonlinearity associated with (potentially unlimited) self‑reproduction.

Self‑reproduction creates a feedback loop in which the system continuously rebuilds and modifies its own internal structure. Variations introduced during replication, together with selective pressures, force the internal variables to stay aligned with external regularities. This dynamic alignment is what endows symbols and sub‑symbolic variables with “aboutness”. Crucially, the theory predicts that reference can be made not only to concrete physical entities but also to abstract concepts, to non‑existent objects, and even to erroneous referents when the internal mapping diverges from the current environment. Such mis‑reference mirrors human hallucinations, fictional narratives, and scientific misconceptions.
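The alignment dynamic described above can be illustrated with a minimal toy simulation (not from the paper; the names `ENV_REGULARITY` and `fitness` are illustrative assumptions). A population of replicators carries an arbitrary internal variable; replication with small variation plus selective pressure is enough to pull that variable toward an external regularity, without any designer assigning it a meaning:

```python
import random

random.seed(0)

ENV_REGULARITY = 0.8  # external feature the internal variable can come to track

def fitness(agent):
    # Agents whose internal variable matches the environmental regularity
    # leave more offspring; alignment is selected for, not designed in.
    return 1.0 - abs(agent - ENV_REGULARITY)

population = [random.random() for _ in range(50)]  # arbitrary internal variables

for generation in range(200):
    # Replication with variation: fitter agents reproduce more often,
    # and each copy mutates slightly.
    weights = [fitness(a) for a in population]
    parents = random.choices(population, weights=weights, k=len(population))
    population = [min(1.0, max(0.0, p + random.gauss(0, 0.02))) for p in parents]

mean_value = sum(population) / len(population)
print(round(mean_value, 2))  # the population clusters near the regularity
```

On this reading, the surviving internal variable is "about" the environmental feature only because the self-reproducing loop keeps regenerating variants and discarding misaligned ones; freeze the loop and the alignment decays into a mere causal correlation.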

The core insights can be summarized as follows: (1) Symbols are not static data items; they are emergent, continuously regenerated structures maintained by a self‑replicating process. (2) The self‑replicating loop is inherently nonlinear and, if unbounded, can generate an open‑ended space of goals. (3) Those goals are fundamentally tied to self‑preservation and self‑propagation; it remains unclear whether goals that do not serve reproduction can acquire stable aboutness.

When the author abstracts the biological mechanism for artificial agents, two major obstacles appear. First, contemporary robots and AI platforms lack the capacity for unlimited self‑replication due to hardware, energy, and safety constraints. Second, we have no mature theoretical or experimental tools to predict and control the highly nonlinear feedback that self‑replication entails. Consequently, the paper does not advocate a literal copy of biological reproduction but suggests extracting its essential principle—self‑referential, self‑adjusting representational dynamics—and implementing it in a constrained form. Possible routes include meta‑learning loops that mimic limited replication, evolutionary algorithms that evolve representations over generations, or self‑modifying neural architectures that continuously rewrite their own weights in response to environmental feedback.
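One of the constrained routes mentioned above, an evolutionary loop over representations, can be sketched as follows. This is a toy model under stated assumptions, not the paper's proposal: `POP_CAP` stands in for the hardware/safety bound on replication, and `TARGET` stands in for an environmental regularity the evolving representations must track:

```python
import random

random.seed(1)

POP_CAP = 20          # hard bound standing in for safety/resource constraints
TARGET = [0.3, -0.7]  # environmental regularity the representations must track

def error(weights):
    # How far an agent's internal representation is from the regularity.
    return sum((w - t) ** 2 for w, t in zip(weights, TARGET))

population = [[random.uniform(-1, 1) for _ in TARGET] for _ in range(POP_CAP)]

for _ in range(300):
    # Replication under a cap: the best half reproduce with small mutations,
    # so the loop is self-adjusting but never open-ended.
    population.sort(key=error)
    survivors = population[:POP_CAP // 2]
    children = [[w + random.gauss(0, 0.05) for w in p] for p in survivors]
    population = survivors + children  # total never exceeds POP_CAP

best = population[0]
print([round(w, 2) for w in best])
```

The cap makes the nonlinearity tractable, but it also removes the "potentially unlimited" character of biological self-reproduction, which is exactly the open question the paper flags: whether aboutness survives such a bound.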

Finally, the author raises a philosophical question: must “aboutness” always be coupled to a reproductive drive, or can non‑reproductive goals—such as artistic creation, scientific inquiry, or altruistic cooperation—develop genuine referential meaning? Human culture shows that symbols can detach from pure biological imperatives, hinting that artificial systems might also achieve aboutness through mechanisms other than self‑replication. The paper concludes that advancing AI beyond designer‑imposed semantics will require designing agents capable of generating, maintaining, and revising their own representational structures, a challenge that sits at the intersection of biology, dynamical systems theory, and AI engineering.

