A Mathematical Formalization of Self-Determining Agency

Notice: This research summary and analysis were automatically generated using AI technology. For full accuracy, please refer to the original arXiv source.

Defining agency is a central challenge for cognitive science and artificial intelligence. Physics generally describes mechanical happenings, but there remains an unbridgeable gap between these and the acts of agents. To discuss the morality and responsibility of agents, it is necessary to model acts; whether such responsible acts can be fully explained by physical determinism remains an ongoing debate. Although we have already proposed a physical agent-determinism model that appears to go beyond mere mechanical happenings, we have not yet established a strict mathematical formalism that eliminates ambiguity. Here, we explain why a physical system can follow coarse-grained agent-level determination without violating physical laws, by formulating supervenient causation. In general, a supervenient property (including a coarse-grained one) does not change without a change in its lower base; therefore, supervenience alone cannot define supervenient causation. We define supervenient causation as causal efficacy from the supervenience level to its lower base level. Although an algebraic expression composed of multiple supervenient functions does supervene on the base, the index sequence that determines the algebraic expression does not supervene on the base. Therefore, the sequence can possess its own dynamical laws, independent of the lower base level. This independent dynamics makes it possible for temporally preceding changes at the supervenience level to cause changes at the lower base level. Such a dual-law system is considered useful for modeling self-determining agents such as humans.


💡 Research Summary

The paper tackles the longstanding problem of how to give a rigorous scientific account of agency, especially the kind of self-determining agency that underlies moral responsibility, without abandoning the physicalist framework of modern science. The authors begin by classifying agency into four hierarchical levels: (1) goal-directedness, (2) goal-switching, (3) goal-generation, and (4) agent-determinism. While most contemporary AI work adopts an "intentional stance" that treats agents as black boxes described from an external observer's perspective, this approach does not explain the internal mechanism that initiates actions. The authors argue that only the fourth level, agent-determinism, truly captures self-initiated acts, but that it has never been formally defined.

To bridge the gap, the authors introduce the notion of supervenient causation. Traditional supervenience, borrowed from philosophy of mind, states that any change at the supervenient (higher) level must be accompanied by a change at the subvenient (lower) level, but it does not grant the higher level any causal power. The paper formalizes supervenient causation by separating the relationship into two mathematical layers:

  1. A set of supervenient functions ( \{f_i\} ) that map configurations of lower‑level physical variables to higher‑level descriptors.
  2. An index sequence ( \sigma = (\sigma_1,\sigma_2,\dots) ) that selects and combines those functions into a composite mapping ( S = \Phi_{\sigma}(f_1,\dots,f_n) ).
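
Written out explicitly (a hedged sketch in my own notation, with a discrete time index ( t ) that the paper may not use), the two layers and their separate update laws can be summarized as:

```latex
% Supervenient functions map base configurations x to higher-level descriptors:
%   f_i : X \to Y
% Composite descriptor, independent higher-level law, and physical law:
S_t = \Phi_{\sigma_t}(f_1,\dots,f_n), \qquad
\sigma_{t+1} = \mathcal{L}_\sigma(\sigma_t), \qquad
x_{t+1} = \mathcal{L}_\phi\bigl(x_t,\; S_t(x_t)\bigr)
```

The key structural point is that ( \mathcal{L}_\sigma ) takes only ( \sigma_t ) as input, never ( x_t ), which is what makes the index sequence independent of the base.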

Crucially, the index sequence is not determined by the lower‑level dynamics; it follows its own independent dynamical law ( \mathcal{L}_\sigma ). The lower‑level physical variables obey the usual physical law ( \mathcal{L}_\phi ). Because ( \sigma ) can change before the physical state does, the higher‑level description can temporally precede and thereby cause changes in the lower‑level system. This creates a dual‑law system: one set of laws governing the physical substrate, another governing the abstract, coarse‑grained "agent" level.
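
As a toy illustration (all names and update rules below are my own assumptions, not the paper's), the dual-law structure can be simulated: the index sequence advances by a rule that never reads the base state, and the base state then relaxes toward whichever supervenient descriptor the current index selects.

```python
# Sketch of a dual-law system (my construction, not the paper's definitions).

# Supervenient functions f_i: each maps a base state to a higher-level descriptor.
f = [
    lambda x: x + 1,   # f_0: "increase"
    lambda x: x - 1,   # f_1: "decrease"
    lambda x: x,       # f_2: "hold"
]

def L_sigma(sigma):
    """Independent higher-level law: cycle through the indices.
    Note it reads only sigma, never the base state x."""
    return (sigma + 1) % len(f)

def L_phi(x, target):
    """Base-level law: relax the physical state toward the descriptor
    currently selected at the supervenient level."""
    return x + 0.5 * (target - x)

x, sigma = 0.0, 0
trajectory = []
for _ in range(6):
    sigma = L_sigma(sigma)       # the higher-level change happens first...
    x = L_phi(x, f[sigma](x))    # ...and only then shapes the base update
    trajectory.append((sigma, round(x, 3)))

print(trajectory)
# → [(1, -0.5), (2, -0.5), (0, 0.0), (1, -0.5), (2, -0.5), (0, 0.0)]
```

Because `L_sigma` is a function of `sigma` alone, the index sequence here carries its own dynamics, independent of the base, which is the structural feature the formalization turns on.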

The authors instantiate this abstract scheme with a feedback‑control interpretation. At the supervenient level, an “I” (the agent) sets relational goals (e.g., “raise the arm relative to the torso”). These goals are encoded as discrete states (beliefs, desires) that are updated according to past experience via the independent dynamics of ( \sigma ). The subvenient level then implements these relational goals by adjusting concrete physical variables (joint angles, muscle activations) through a standard control loop. Because the supervenient goals are relative rather than absolute, they avoid the need for an externally imposed objective scale, which the authors argue cannot be self‑determined.
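
A minimal sketch of the feedback-control reading (the variable names, gain, and proportional law are my assumptions, not the paper's): the supervenient level fixes a relational goal, here "hand 0.3 above the shoulder," and the subvenient loop adjusts the concrete physical variable until the relation holds, without any absolute external scale.

```python
# Hedged sketch: a relational goal implemented by a proportional control loop.

def simulate(goal_offset, shoulder_height=1.0, gain=0.2, steps=200):
    """Drive the hand height toward shoulder_height + goal_offset.
    The goal is *relational* (an offset from the shoulder), not an
    absolute target imposed from outside the agent."""
    hand = 0.0  # concrete physical variable (e.g., hand height)
    for _ in range(steps):
        error = (shoulder_height + goal_offset) - hand  # relational error
        hand += gain * error                            # control update
    return hand

final = simulate(goal_offset=0.3)
print(round(final, 4))  # converges near 1.3
```

If the shoulder moved, the same relational goal would produce a different absolute hand position, which is the sense in which the goal avoids an externally imposed objective scale.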

Philosophically, the model unifies agent causation (the agent as the origin of action) and mental‑event causation (mental states causing physical events). The “I” provides the agent‑causation component, while the supervenient‑to‑subvenient feedback provides the mental‑event causation component. This mirrors Clarke’s (1993) integration of the two notions but grounds it in a precise mathematical framework based on token causation rather than type causation, preserving Pearl’s manipulability criterion without relaxing it.

The paper also discusses why single‑level causal accounts fail: they either lead to circular causation or infinite regress (the homunculus problem). By allowing the supervenient level its own autonomous dynamics, the model supplies a non‑circular source of initiation. The authors illustrate the idea with examples such as sleep, where the index sequence is frozen, preventing action, and waking, where a change in the supervenient goal re‑activates the physical control loop.
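
The sleep/wake illustration can be made concrete with a toy model (my own construction: "asleep" freezes the index sequence, "awake" lets it run its independent dynamics):

```python
# Toy sleep/wake model: the base law always runs, but new action is
# initiated only when the supervenient index sequence is allowed to change.

GOALS = [1.0, -1.0]  # higher-level descriptors the base state follows

def step(x, sigma, awake):
    if awake:
        sigma = (sigma + 1) % len(GOALS)  # supervenient law runs only when awake
    x = x + 0.5 * (GOALS[sigma] - x)      # base law always tracks the current goal
    return x, sigma

# Asleep: sigma is frozen, so the base state settles at a fixed point
# and no new action is ever initiated.
x, sigma = 0.0, 0
for _ in range(60):
    x, sigma = step(x, sigma, awake=False)
asleep_x = round(x, 6)

# Awake: changes in the supervenient index keep re-activating the
# control loop, so the base state keeps moving.
x, sigma = 0.0, 0
awake_states = set()
for _ in range(60):
    x, sigma = step(x, sigma, awake=True)
    awake_states.add(round(x, 2))

print(asleep_x, len(awake_states) > 1)
```

The frozen-index case converges and stays put, while the running index produces an ongoing alternation, mirroring the paper's point that a change at the supervenient level is what re-initiates lower-level activity.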

While the theoretical construction is thorough, the paper leaves several practical questions open. The concrete form of the independent dynamics ( \mathcal{L}_\sigma ) is left unspecified, as are algorithms for learning or updating the index sequence from experience. Moreover, empirical validation, whether in neural modeling, robotics, or AI systems, remains to be demonstrated.

In summary, the authors provide a novel formalization of supervenient causation that endows a coarse‑grained agent level with its own dynamical laws, thereby allowing temporally prior, higher‑level causes to affect lower‑level physical states without violating physical determinism. This dual‑law framework offers a promising route to model self‑determining agents, reconciling free‑will‑type agency with a fully physicalist ontology.

