Machine Cognition Models: EPAM and GPS

Throughout history, humans have sought to delegate their daily tasks to other agents, a drive that helped give rise to civilizations. It began with the domestication of animals to automate work in agriculture (oxen), transportation (e.g., horses and donkeys), and even communication (carrier pigeons). Centuries later came the Golden Age of Al-Jazari and other Muslim inventors, the pioneers of automation, whose work foreshadowed the industrial revolution in Europe. At the end of the nineteenth century a new era began: the computational era, the most far-reaching technological and scientific development of modern times and a driving force behind advances across the sciences, including medicine, communication, education, and physics. At this edge of technology, engineers and scientists are trying to build machines that behave as humans do; this ambition to design and implement "things that think" gave birth to artificial intelligence. In this work we cover two major early models in the field of machine cognition: the Elementary Perceiver and Memorizer (EPAM) and the General Problem Solver (GPS). The first focuses mainly on modeling human verbal-learning behavior, while the second attempts an architecture able to solve problems in general (e.g., theorem proving, chess playing, and arithmetic). We present the goals and main ideas of each model, compare their strengths and weaknesses, and outline their fields of application. Finally, we suggest a real-life implementation of a cognitive machine.


💡 Research Summary

The paper traces the long‑standing human desire to offload tasks onto other agents, beginning with the domestication of animals for agriculture, transport, and communication, moving through the medieval inventions of Al‑Jazari and other Muslim engineers, and culminating in the industrial revolution and the computational era of the late nineteenth century. Within this historical context, the authors focus on two seminal cognitive‑modeling systems that attempted to replicate human thought processes in machines: the Elementary Perceiver and Memorizer (EPAM) and the General Problem Solver (GPS).

EPAM, introduced by Edward A. Feigenbaum (working with Herbert A. Simon) around 1959–1961, is a model of verbal learning. It decomposes an input stimulus into a set of primitive features, then routes these features through a discrimination network—a tree‑like structure that stores previously encountered patterns. When a new stimulus is presented, the network compares it with existing nodes; if a distinguishing feature is found, a new branch is created, thereby extending the memory. The system is divided into a Perceiver (feature extraction and discrimination) and a Memorizer (index‑based associative storage). EPAM successfully reproduces phenomena such as the learning curve and stimulus generalization observed in human subjects, and it has been applied to early natural‑language‑learning tasks, simple syntax acquisition, and human‑computer interaction prototypes. However, its reliance on a fixed feature set limits its ability to capture deep semantic relations, and the growth of the discrimination tree with the number of distinguishing features leads to a combinatorial blow‑up. Moreover, EPAM is essentially a learning‑only architecture; it does not provide mechanisms for planning, inference, or abstract reasoning.
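The growth mechanism described above—sorting a stimulus down a tree of feature tests and sprouting a new branch at the first distinguishing feature—can be illustrated with a minimal Python sketch. The feature encoding (consonant/vowel positions) and the nonsense syllables are illustrative placeholders, not details from the paper:

```python
# Minimal sketch of an EPAM-style discrimination network.
# Features and syllables are hypothetical examples.

class Node:
    def __init__(self):
        self.children = {}   # feature value -> child Node
        self.image = None    # stored pattern (the "memorized" item)

class DiscriminationNet:
    def __init__(self):
        self.root = Node()

    def learn(self, features, pattern):
        """Sort the feature tuple down the tree, growing a new branch
        wherever a distinguishing feature is not yet represented."""
        node = self.root
        for f in features:
            node = node.children.setdefault(f, Node())
        node.image = pattern

    def recognize(self, features):
        """Follow only existing branches; return the stored image
        reached, or None if the stimulus is not yet discriminated."""
        node = self.root
        for f in features:
            if f not in node.children:
                return None
            node = node.children[f]
        return node.image

net = DiscriminationNet()
net.learn(("consonant", "vowel", "consonant"), "DAX")
net.learn(("consonant", "vowel", "vowel"), "DAO")
print(net.recognize(("consonant", "vowel", "consonant")))  # DAX
print(net.recognize(("vowel", "vowel", "vowel")))          # None (unfamiliar)
```

Note how the two learned syllables share the prefix path and diverge only at the third feature—this shared-prefix structure is also why the tree can grow quickly as the feature vocabulary expands.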

GPS, proposed by Allen Newell, J. C. Shaw, and Herbert A. Simon in 1959, embodies a domain‑independent problem‑solving framework. It formalizes a problem as a transformation from an initial state to a goal state within a defined problem space. The core algorithm is means‑ends analysis: the system identifies differences between the current and goal states, selects operators whose effects reduce those differences, and recursively applies them. Operators are described by preconditions and effects, allowing GPS to construct a search tree. Various search strategies—depth‑first, breadth‑first, and heuristic‑guided limited‑depth search—are combined to manage the combinatorial explosion. GPS was demonstrated on theorem proving, chess, arithmetic, and other tasks, showing that a single architecture could, in principle, solve a wide range of problems. Its weaknesses stem from the need for a complete, hand‑crafted operator library; without exhaustive domain knowledge, the system cannot discover novel strategies. The state space can still become intractably large, and the quality of the heuristic heavily influences performance, making generalization beyond the engineered domains difficult.
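The means‑ends loop described above—find a difference, pick an operator that reduces it, recursively achieve that operator's preconditions—can be sketched in a few lines of Python. The states, operator names, and the depth bound here are toy assumptions for illustration, not the actual GPS implementation:

```python
# Toy means-ends analysis in the spirit of GPS.
# State = frozenset of facts; operators carry preconditions and effects.

class Operator:
    def __init__(self, name, preconds, adds, deletes):
        self.name = name
        self.preconds = frozenset(preconds)
        self.adds = frozenset(adds)
        self.deletes = frozenset(deletes)

def apply(op, state):
    return (state - op.deletes) | op.adds

def means_ends(state, goal, operators, depth=10):
    """Return a list of operators transforming state into goal, or None.
    Chooses operators that reduce a current difference, achieving their
    preconditions first (the recursive step of means-ends analysis)."""
    if goal <= state:
        return []                               # no difference remains
    if depth == 0:
        return None                             # limited-depth cutoff
    for op in operators:
        if op.adds & (goal - state):            # op reduces a difference
            pre_plan = means_ends(state, op.preconds, operators, depth - 1)
            if pre_plan is None:
                continue
            mid = state
            for p in pre_plan:
                mid = apply(p, mid)
            if not op.preconds <= mid:
                continue
            rest = means_ends(apply(op, mid), goal, operators, depth - 1)
            if rest is not None:
                return pre_plan + [op] + rest
    return None

ops = [
    Operator("pick-up",  {"hand-empty", "on-table"}, {"holding"},
             {"hand-empty", "on-table"}),
    Operator("put-down", {"holding"}, {"on-shelf", "hand-empty"},
             {"holding"}),
]
plan = means_ends(frozenset({"hand-empty", "on-table"}),
                  frozenset({"on-shelf"}), ops)
print([op.name for op in plan])  # ['pick-up', 'put-down']
```

The planner never applies "put-down" directly, because its precondition "holding" is not yet true; it first recurses to achieve that precondition via "pick-up", which is exactly the subgoaling behavior that distinguishes means‑ends analysis from blind search.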

The comparative analysis highlights complementary strengths: EPAM excels at incremental learning and pattern discrimination, while GPS shines in goal‑directed planning and systematic search. Their limitations are also mirrored in modern AI—feature engineering versus end‑to‑end representation learning, and handcrafted search heuristics versus learned policies. The authors argue that contemporary research on meta‑learning, reinforcement learning, and neural‑symbolic integration can be viewed as attempts to fuse the EPAM‑style perceptual learning with GPS‑style means‑ends reasoning.

Applications are outlined accordingly. EPAM‑based systems are suitable for language‑learning software, adaptive user interfaces, and early natural‑language processing modules that require fast associative recall. GPS‑based techniques are appropriate for automated theorem provers, game AI, robotic path planning, and any domain where a clear goal state can be defined and a set of operators is available.

Finally, the paper proposes a concrete “cognitive‑planning hybrid machine” that combines the two paradigms. A perception module modeled after EPAM would continuously update a discrimination network with sensory data (e.g., visual features of objects). When a task is issued, a GPS‑style means‑ends analyzer would query this network to retrieve relevant operators and construct a plan that bridges the current state to the desired goal. As a proof‑of‑concept, the authors suggest a robotic arm that learns to recognize new objects on the fly (EPAM) and then autonomously generates a sequence of motions to grasp and relocate them (GPS). This architecture demonstrates how early cognitive models can inform the design of modern, adaptable AI systems that both learn from experience and reason about future actions.
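The proposed coupling—an EPAM‑like perceiver whose recognitions select the operators a GPS‑like planner chains together—can be outlined in a short sketch. All object types, features, and motion primitives below are illustrative placeholders invented for this example, not details from the paper:

```python
# Rough sketch of the proposed hybrid architecture: perceptual recall
# (EPAM side) supplies the operators that the planner (GPS side) chains.
# All names here are hypothetical examples.

# Perception: learned associations from observed features to object types.
discrimination_memory = {("red", "cylindrical"): "can",
                         ("brown", "rectangular"): "box"}

# Planning knowledge: per-object grasp sequences the planner can extend.
grasp_plans = {"can": ["open-gripper", "wrap-grasp", "lift"],
               "box": ["open-gripper", "pinch-grasp", "lift"]}

def perceive(features):
    """EPAM-style recall: return the recognized object type, or None
    if the object has not been learned yet."""
    return discrimination_memory.get(tuple(features))

def plan_relocation(features, target):
    """GPS-style step: retrieve the operators for the recognized object
    and append transport and release actions toward the goal location."""
    obj = perceive(features)
    if obj is None:
        return None   # unfamiliar object: the EPAM side must learn it first
    return grasp_plans[obj] + [f"move-to:{target}", "release"]

print(plan_relocation(["red", "cylindrical"], "bin"))
# ['open-gripper', 'wrap-grasp', 'lift', 'move-to:bin', 'release']
```

The key design point is the interface: the planner does not operate on raw sensory data, only on the symbolic object types the perceiver has already discriminated, mirroring the division of labor the authors propose for the robotic arm.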

In conclusion, EPAM and GPS represent foundational attempts to model human cognition—learning and problem solving—within machines. By revisiting their core ideas and integrating them with contemporary learning algorithms, researchers can develop more general, flexible, and human‑like artificial agents.