Cognitive Bias for Universal Algorithmic Intelligence

Existing theoretical universal algorithmic intelligence models are not practically realizable. A more pragmatic approach to artificial general intelligence is based on cognitive architectures, which are, however, non-universal in the sense that they can construct and use models of the environment drawn only from Turing-incomplete model spaces. We believe that the path to real AGI lies in bridging the gap between these two approaches. This becomes possible if cognitive functions are treated as a “cognitive bias” (priors and search heuristics) that can be incorporated into models of universal algorithmic intelligence without violating their universality. Earlier reported results supporting this approach, and its overall feasibility, are discussed using the examples of perception, planning, knowledge representation, attention, theory of mind, language, and several others.


💡 Research Summary

The paper tackles a fundamental tension in artificial general intelligence research: on the one hand, universal algorithmic intelligence (UAI) models such as AIXI provide a mathematically complete, optimal framework for decision‑making in unknown environments, but they are computationally infeasible because they must search over all possible programs. On the other hand, contemporary AGI systems rely on cognitive architectures (e.g., SOAR, ACT‑R, LIDA) or deep neural networks that operate within restricted, Turing‑incomplete model spaces, thereby sacrificing the theoretical universality of UAI.
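The computational infeasibility of UAI-style search can be made concrete with a toy sketch of a Solomonoff-style universal mixture, where every program is weighted by \(2^{-\ell}\) for its length \(\ell\). The function names and the binary-string "programs" below are illustrative only; even at a trivial 16-bit bound the hypothesis space already exceeds a hundred thousand programs:

```python
import itertools

def universal_prior_weight(program: str) -> float:
    """Weight of a binary program under the universal prior, 2^(-length)."""
    return 2.0 ** (-len(program))

def enumerate_programs(max_len: int):
    """Enumerate all binary programs up to max_len bits."""
    for length in range(1, max_len + 1):
        for bits in itertools.product("01", repeat=length):
            yield "".join(bits)

# Exponential blowup: 2^1 + 2^2 + ... + 2^16 = 131070 programs at 16 bits.
programs = list(enumerate_programs(16))
print(len(programs))                    # 131070
print(universal_prior_weight("0101"))   # 0.0625
```

Real UAI models must in principle score every such program against the observation history, which is what makes unbiased universal search intractable.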

To bridge this gap, the authors introduce the notion of “cognitive bias” as a set of priors and search heuristics that embody human‑like inductive assumptions (perceptual priors, attention mechanisms, planning heuristics, theory‑of‑mind models, etc.). Crucially, these biases are formalized as probability distributions and heuristic functions that can be layered onto the underlying Bayesian optimality principle of UAI without narrowing the space of admissible programs. In other words, the bias does not eliminate any potential solution; it merely re‑weights the search toward those programs that are more plausible given prior knowledge.
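The key property of "re-weighting without elimination" can be sketched as a posterior in which a strictly positive bias factor multiplies the universal weight, so every hypothesis keeps nonzero probability mass. The bias function below is a made-up example, not a construction from the paper:

```python
def biased_posterior(hypotheses, universal_weight, bias, likelihood):
    """Combine universal prior, bias factor, and data likelihood.

    bias(h) must be > 0 for every h, so the support of the posterior
    is unchanged and universality is preserved.
    """
    scores = {h: universal_weight(h) * bias(h) * likelihood(h)
              for h in hypotheses}
    total = sum(scores.values())
    return {h: s / total for h, s in scores.items()}

hyps = ["h_simple", "h_complex"]
post = biased_posterior(
    hyps,
    universal_weight=lambda h: 0.5,
    bias=lambda h: 10.0 if h == "h_simple" else 1.0,  # favors, never excludes
    likelihood=lambda h: 1.0,
)
print(post["h_complex"] > 0)  # True: every hypothesis retains nonzero mass
```

The bias shifts search effort toward plausible programs while the disfavored hypothesis remains recoverable should the data demand it.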

The paper demonstrates the feasibility of this approach through seven concrete domains: perception, planning, knowledge representation, attention, theory of mind, language, and meta‑learning of the biases themselves. For perception, variational auto‑encoders and Bayesian filters serve as priors that constrain image and audio modeling. In planning, model‑based reinforcement learning is equipped with A*‑style heuristic cost functions, dramatically reducing the depth of tree search. Knowledge representation leverages ontologies and graph structures as prior probabilities, ensuring logical consistency while allowing novel concept acquisition. Attention is implemented via meta‑reinforcement learning that dynamically reallocates computational resources to salient inputs. Theory of mind is modeled by Bayesian inference over other agents’ policies, enabling prediction of intentions in cooperative or competitive settings. Language processing incorporates pretrained transformer models as priors, accelerating syntactic and semantic search. Finally, the biases themselves are subject to meta‑learning, allowing the system to refine its priors and heuristics as experience accumulates.
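The planning claim can be illustrated with a minimal A* search: an admissible heuristic prunes the frontier without ever excluding an optimal solution, which is the same guide-but-don't-eliminate role the paper assigns to cognitive biases. The grid world and Manhattan heuristic below are our own illustrative choices, not taken from the paper:

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """Return the cost of a shortest path from start to goal."""
    frontier = [(heuristic(start, goal), 0, start)]
    best = {start: 0}
    while frontier:
        _, cost, node = heapq.heappop(frontier)
        if node == goal:
            return cost
        for nxt, step in neighbors(node):
            new_cost = cost + step
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(
                    frontier,
                    (new_cost + heuristic(nxt, goal), new_cost, nxt),
                )
    return None

def grid_neighbors(p):
    """4-connected moves on a 5x5 grid, each with unit cost."""
    x, y = p
    return [((x + dx, y + dy), 1)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < 5 and 0 <= y + dy < 5]

manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
print(a_star((0, 0), (4, 4), grid_neighbors, manhattan))  # 8
```

Because the Manhattan heuristic never overestimates the true cost, the returned path cost is provably optimal even though far fewer nodes are expanded than in uninformed search.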

Empirical simulations show that integrating cognitive biases into a universal Bayesian framework yields agents that are orders of magnitude more efficient than pure UAI agents, yet retain a broader generalization capability than conventional cognitive architectures. Even when initial priors are imperfect, the meta‑learning component corrects them over time, leading to progressive performance gains.
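The claim that imperfect priors are corrected over time is the standard behavior of Bayesian updating, which can be sketched with a two-model toy (our own illustrative setup, not the paper's experiments): even a badly misallocated prior converges to the data-generating model as evidence accumulates.

```python
def update(prior, likelihoods, observation):
    """One Bayesian update of a belief over models given an observation."""
    post = {m: prior[m] * likelihoods[m](observation) for m in prior}
    z = sum(post.values())
    return {m: p / z for m, p in post.items()}

# "true" model emits 1 with prob 0.9; "wrong" model emits 1 with prob 0.1.
lik = {"true":  lambda o: 0.9 if o == 1 else 0.1,
       "wrong": lambda o: 0.1 if o == 1 else 0.9}

belief = {"true": 0.01, "wrong": 0.99}   # badly misallocated prior
for obs in [1] * 10:                     # stream of evidence favoring "true"
    belief = update(belief, lik, obs)
print(belief["true"] > 0.99)  # True: the initial bias is washed out
```

Each observation multiplies the odds ratio by 9 in this toy, so ten observations overwhelm the 1:99 initial handicap; a meta-learning layer generalizes the same idea to the priors and heuristics themselves.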

The discussion emphasizes that cognitive bias acts as a bridge between universality and practicality, preserving the theoretical guarantees of UAI while delivering tractable computation. The authors identify future research directions: developing robust meta‑learning algorithms for bias adaptation, analyzing interactions among multiple biases (e.g., how attention influences planning), and validating the approach on physical robots and real‑world autonomous systems. They conclude that this bias‑augmented universal intelligence paradigm offers a promising pathway toward truly universal yet implementable AGI.