Not only a lack of right definitions: Arguments for a shift in information-processing paradigm
Machine Consciousness and Machine Intelligence are not simply new buzzwords that occupy our imagination. Over the last decades, we have witnessed an unprecedented rise in attempts to create machines with human-like features and capabilities. However, despite widespread sympathy and abundant funding, progress in these enterprises is far from satisfactory. The reasons for this are twofold: first, the notions of cognition and intelligence (usually borrowed from human-behavior studies) are notoriously blurred and ill-defined; second, the basic concepts underpinning the whole discourse are themselves either undefined or defined only vaguely. This leads to improper and inadequate determination of research goals, which I illustrate with examples drawn from recent documents issued by DARPA and the European Commission. I also propose some remedies that, I hope, will improve the current unsatisfactory state of the art.
💡 Research Summary
The paper argues that the disappointing progress in machine consciousness and machine intelligence is not merely a matter of insufficient funding or technical skill, but stems from two deep-rooted conceptual problems. First, the core notions of “cognition,” “intelligence,” and “consciousness” are borrowed from human‑behavior research without a rigorous, shared definition. In psychology, neuroscience, philosophy, and artificial‑intelligence (AI) communities these terms carry different connotations, yet AI research routinely treats them as interchangeable. This conceptual fuzziness leads to inconsistent experimental designs, ambiguous performance metrics, and ultimately to research goals that are poorly specified.
Second, the underlying information‑processing paradigm that frames AI research is itself ill‑defined. The dominant input‑output function‑mapping model fails to capture the hierarchical, dynamic, and context‑sensitive nature of biological cognition. Because the paradigm lacks a clear theoretical foundation, the field has split into two divergent tracks—brain‑inspired (neurosymbolic) and purely statistical learning—each pursuing its own set of objectives and evaluation criteria that are often mutually incompatible.
To illustrate the practical consequences of these problems, the author examines two recent policy documents. The U.S. Defense Advanced Research Projects Agency (DARPA) program “Machine Common Sense” promises AI systems that possess human‑like common sense, yet it provides no precise definition of “common sense.” Consequently, project selection and success criteria are reduced to quantitative proxies such as data volume or processing speed, which do not guarantee genuine inferential abilities. Similarly, the European Commission’s draft “Artificial Intelligence Act” emphasizes “trustworthiness” and “transparency” for high‑risk AI, but the lack of concrete definitions makes regulatory enforcement ambiguous and can either stifle innovation or allow unsafe systems to slip through the cracks.
In response, the paper proposes a comprehensive paradigm shift based on a hierarchical, modular information‑processing architecture. The architecture is divided into three layers: (1) Sensory‑Preprocessing, which normalizes raw inputs and extracts low‑level features; (2) Pattern‑Inference, which employs probabilistic and statistical models to recognize structures and make predictions; and (3) Goal‑Value‑Integration, which combines long‑term objectives, utility functions, and ethical constraints to select actions. Each layer is assigned a formal, domain‑specific definition and a set of performance metrics that go beyond simple accuracy—metrics such as information gain, Shannon entropy reduction, and Bayesian posterior probabilities are recommended.
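The layer-specific metrics recommended above (information gain, Shannon entropy reduction) can be made concrete with a minimal sketch. The function names and the example distributions below are illustrative assumptions, not taken from the paper; the sketch only shows how an inference step's quality could be scored as the entropy it removes from a belief distribution:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def information_gain(prior, posterior):
    """Entropy reduction achieved by an inference step: H(prior) - H(posterior)."""
    return shannon_entropy(prior) - shannon_entropy(posterior)

# Example: a hypothetical Pattern-Inference layer sharpens a uniform
# belief over four hypotheses into a confident posterior.
prior = [0.25, 0.25, 0.25, 0.25]   # H = 2.0 bits
posterior = [0.7, 0.1, 0.1, 0.1]   # lower entropy after inference
print(f"entropy reduction: {information_gain(prior, posterior):.3f} bits")
```

A metric of this kind rewards a system for genuinely reducing uncertainty about its hypotheses, rather than for raw throughput or dataset size.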
The author further advocates for a meta‑definition framework that unifies formal logic, Bayesian probability, and dynamical systems theory to produce precise, machine‑readable specifications of “cognition,” “learning,” and “decision.” By grounding experimental protocols and benchmark suites in this meta‑definition, researchers can evaluate AI systems on both quantitative and qualitative dimensions, ensuring that progress reflects genuine advances in information‑processing principles rather than superficial scaling of data or compute.
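The paper describes this meta-definition framework only abstractly. As one hedged illustration of its Bayesian ingredient, a "learning" step could be specified machine-readably as a posterior update over an explicit hypothesis set; everything here (function name, hypothesis counts, numbers) is a hypothetical sketch, not the author's formalism:

```python
def bayes_posterior(prior, likelihood):
    """Posterior P(h|e) proportional to P(e|h) * P(h), normalized over hypotheses."""
    unnorm = [p * l for p, l in zip(prior, likelihood)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Two hypotheses with equal priors; the evidence favors the first 3:1.
post = bayes_posterior([0.5, 0.5], [0.75, 0.25])
print(post)  # → [0.75, 0.25]
```

The point of such a specification is that "learning" is then a precise, checkable claim about how a belief state must change given evidence, rather than an informal label attached to any system that improves with data.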
A key implication of this re‑orientation is that research goals shift from “building a system that mimics a particular human ability” to “discovering and engineering the underlying processing principles that enable such abilities.” This reframing promises more robust, reproducible science and provides a clearer pathway toward eventual machine consciousness.
Finally, the paper stresses the necessity of interdisciplinary collaboration. Cognitive scientists, neuroscientists, computer scientists, philosophers, and policy makers must jointly craft the definitions, standards, and regulatory language that will guide future research. By addressing the current “definition vacuum,” the community can avoid wasted resources, align funding with scientifically meaningful objectives, and lay a solid foundation for the next generation of intelligent machines.