Computational Logic Foundations of KGP Agents
This paper presents the computational logic foundations of a model of agency called the KGP (Knowledge, Goals and Plan) model. This model allows the specification of heterogeneous agents that can interact with each other and can exhibit both proactive and reactive behaviour, allowing them to function in dynamic environments by adjusting their goals and plans as those environments change. KGP provides a highly modular agent architecture that integrates a collection of reasoning and physical capabilities, synthesised within transitions that update the agent's state in response to reasoning, sensing and acting. Transitions are orchestrated by cycle theories that specify the order in which transitions are executed, taking into account the dynamic context and agent preferences, and by selection operators that provide inputs to transitions.
💡 Research Summary
The paper establishes a rigorous computational‑logic foundation for the Knowledge, Goals and Plan (KGP) model of agency, positioning it as a highly modular architecture capable of supporting heterogeneous agents that must operate in dynamic, unpredictable environments. At its core, a KGP agent’s state is split into three logically distinct components: a knowledge base (K) containing facts and rules, a set of goals (G) representing desired states, and a collection of plans (P) that encode action sequences intended to achieve those goals. These components are formalised using a combination of Abductive Logic Programming (ALP) and Constraint Logic Programming (CLP), allowing the system to handle incomplete information, hypothesis generation, and numeric or temporal constraints in a uniform way.
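The K/G/P decomposition can be pictured with a minimal sketch. This is an illustrative Python approximation, not the paper's formalism: the real components are abductive logic programs with temporal constraints, whereas here `AgentState`, its fields, and `goal_achieved` are all simplifying assumptions over plain sets and lists.

```python
from dataclasses import dataclass, field

@dataclass
class AgentState:
    """Toy stand-in for a KGP agent state <K, G, P>."""
    knowledge: set = field(default_factory=set)   # K: facts the agent currently believes
    goals: set = field(default_factory=set)       # G: properties the agent wants to hold
    plans: dict = field(default_factory=dict)     # P: goal -> ordered list of actions

    def goal_achieved(self, goal) -> bool:
        # In this propositional toy, a goal holds once it appears in K.
        return goal in self.knowledge

state = AgentState()
state.goals.add("door_open")
state.plans["door_open"] = ["walk_to_door", "push_door"]
```

The point of the sketch is only the separation of concerns: beliefs, desired states, and the action structures linking them are distinct objects that the transitions below manipulate independently.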
The model defines a suite of reasoning capabilities—Goal Introduction (GI), Plan Introduction (PI), Action Execution (AE), Observation (OB), and State Revision (SR)—each realised as a transition that transforms the agent’s state. Transitions are not fire‑and‑forget; they are governed by selection operators that take into account the current context, resource availability, and explicit preference structures. For example, the GI operator may generate new goals when a relevant trigger is detected, while the PI operator invokes a planner (expressed as a logical inference rule) to produce a plan that satisfies a selected goal under existing constraints.
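The pattern of transitions as state transformers fed by selection operators can be sketched as follows. All names here (`introduce_goal`, `select_action`, `execute_action`) and the dictionary-based state are assumptions for illustration; in the paper these are defined as logical inference over ALP/CLP theories, not Python functions.

```python
def introduce_goal(state, trigger_to_goal):
    """GI-style transition: add goals whose triggers appear in K."""
    for trigger, goal in trigger_to_goal.items():
        if trigger in state["K"]:
            state["G"].add(goal)
    return state

def select_action(state):
    """Selection operator: supply AE with its input, here the
    first pending action of any plan."""
    for goal, actions in state["P"].items():
        if actions:
            return goal, actions[0]
    return None

def execute_action(state):
    """AE-style transition: execute the selected action and record
    its effect in K."""
    choice = select_action(state)
    if choice is None:
        return state
    goal, action = choice
    state["K"].add(f"done({action})")
    state["P"][goal].pop(0)
    return state
```

Note how the selection operator is a separate, swappable piece: a different policy (e.g. most urgent goal first) changes AE's input without touching the transition itself, mirroring the modularity the paper emphasises.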
Crucially, the order in which these transitions are applied is not hard‑wired. Instead, a meta‑level construct called a Cycle Theory specifies, in logical terms, the admissible sequences of transitions. Cycle theories encode priorities (e.g., “react to observations before planning”), urgency levels, cost considerations, and even inter‑agent coordination constraints. Because Cycle Theories are themselves logic programs, they can be dynamically revised, enabling agents to adapt their control flow as the environment evolves or as higher‑level policies change. This stands in contrast to traditional BDI agents, whose deliberation cycles are fixed and therefore less flexible in the face of rapid change.
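A cycle theory's role can be approximated by a priority-ordered list of condition/transition rules with first-match selection. This is a deliberate simplification: the paper encodes cycle theories as logic programs with priorities, which can themselves be revised at runtime, whereas the rule list and state fields below are illustrative assumptions.

```python
def next_transition(state, cycle_theory):
    """Return the name of the next transition to run: the first rule
    whose condition holds in the current state wins."""
    for condition, transition in cycle_theory:
        if condition(state):
            return transition
    return "IDLE"

# Priorities read top-down: react to observations before planning,
# plan before acting.
cycle_theory = [
    (lambda s: s["pending_observations"], "OB"),
    (lambda s: any(g not in s["planned"] for g in s["goals"]), "PI"),
    (lambda s: s["pending_actions"], "AE"),
]
```

Because the control knowledge is data rather than a hard-coded loop, replacing or reordering rules changes the agent's deliberation style without modifying any transition, which is the contrast with fixed BDI cycles that the summary draws.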
Physical capabilities (sensors, actuators) are cleanly separated from reasoning capabilities. A sensor module supplies observations that feed the OB transition; an actuator module executes actions produced by AE. This separation permits heterogeneous integration: a robot, a software service, or a hybrid system can all be wrapped in the same KGP framework simply by providing the appropriate physical capability modules, without altering the underlying logical machinery.
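The wrapping idea can be shown with a minimal interface: any system exposing the same sensing and acting surface plugs into the reasoning machinery unchanged. The class names and method signatures below are assumptions for the sketch, not an API from the paper.

```python
class PhysicalCapabilities:
    """Interface separating the physical layer from reasoning:
    sense() feeds OB, act() is invoked by AE."""
    def sense(self) -> set:
        raise NotImplementedError
    def act(self, action) -> bool:
        raise NotImplementedError

class SimulatedRobot(PhysicalCapabilities):
    """One possible wrapper; a software service or hardware driver
    would implement the same two methods."""
    def __init__(self):
        self.log = []
    def sense(self) -> set:
        return {"obstacle_ahead"}
    def act(self, action) -> bool:
        self.log.append(action)
        return True
```

Swapping `SimulatedRobot` for a different subclass is the whole integration step; the logical machinery above this interface never changes.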
From a formal perspective, the authors provide both a declarative semantics (defining the logical properties of states before and after each transition) and an operational semantics (mapping transitions to concrete inference steps in an ALP/CLP engine). This dual semantics enables formal verification of properties such as safety (no illegal actions), liveness (goals eventually become satisfied under reasonable assumptions), and consistency (knowledge base remains non‑contradictory).
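The flavour of a verifiable consistency property can be conveyed with a toy check over a propositional knowledge base. This is only a sketch under strong assumptions (negation encoded as a `not_` prefix); in the paper consistency is defined over abductive logic programs with constraints, and `revise` below is an invented guard, not the paper's State Revision transition.

```python
def consistent(knowledge: set) -> bool:
    """A fact set is contradictory if it contains both p and not_p."""
    return not any(("not_" + fact) in knowledge for fact in knowledge)

def revise(knowledge: set, new_fact: str) -> set:
    """SR-flavoured guard: admit a new fact only if the resulting
    knowledge base stays consistent."""
    candidate = set(knowledge) | {new_fact}
    return candidate if consistent(candidate) else set(knowledge)
```

The dual semantics is what makes properties like this checkable at both levels: declaratively as an invariant on states, and operationally as a guard on each inference step.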
The paper also discusses how KGP supports both proactive behaviour (generating and pursuing goals autonomously) and reactive behaviour (responding immediately to unexpected observations). By allowing the Cycle Theory to interleave GI, OB, and SR transitions based on situational cues, an agent can suspend a long‑term plan to address an emergency, then resume its original objectives once the crisis is resolved.
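The suspend-and-resume pattern reduces to a small control loop in which a reactive condition preempts the proactive plan. All field names (`emergency`, `plan`, `trace`) are illustrative, and the single `step` function stands in for the far richer cycle-theory-driven interleaving described above.

```python
def step(state):
    """One control cycle: reactive handling preempts the plan;
    otherwise the next planned action runs."""
    if state["emergency"]:
        state["trace"].append("handle_emergency")
        state["emergency"] = False
    elif state["plan"]:
        state["trace"].append(state["plan"].pop(0))
    return state

state = {"emergency": False, "plan": ["a1", "a2"], "trace": []}
step(state)                  # proactive: executes a1
state["emergency"] = True    # an unexpected observation arrives
step(state)                  # reactive: the emergency preempts a2
step(state)                  # proactive again: a2 resumes
```

The plan is never discarded during the interruption; it simply waits until no higher-priority condition fires, which is the behaviour the summary attributes to situationally driven interleaving.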
In summary, the KGP model unifies knowledge representation, goal management, planning, and execution within a single computational‑logic framework. Its transition‑based architecture, governed by dynamically configurable Cycle Theories and supported by robust selection operators, offers a level of adaptability and formal rigor that surpasses many existing agent architectures. The work therefore provides a solid theoretical basis for building reliable, flexible, and verifiable multi‑agent systems capable of operating in complex, real‑world domains.