Towards common-sense reasoning via conditional simulation: legacies of Turing in Artificial Intelligence


The problem of replicating the flexibility of human common-sense reasoning has captured the imagination of computer scientists since the early days of Alan Turing’s foundational work on computation and the philosophy of artificial intelligence. In the intervening years, the idea of cognition as computation has emerged as a fundamental tenet of Artificial Intelligence (AI) and cognitive science. But what kind of computation is cognition? We describe a computational formalism centered around a probabilistic Turing machine called QUERY, which captures the operation of probabilistic conditioning via conditional simulation. Through several examples and analyses, we demonstrate how the QUERY abstraction can be used to cast common-sense reasoning as probabilistic inference in a statistical model of our observations and the uncertain structure of the world that generated that experience. This formulation is a recent synthesis of several research programs in AI and cognitive science, but it also represents a surprising convergence of several of Turing’s pioneering insights in AI, the foundations of computation, and statistics.


💡 Research Summary

The paper presents a unified computational framework for common‑sense reasoning built around a probabilistic Turing machine called QUERY. The authors begin by revisiting the long‑standing hypothesis that cognition is a form of computation, tracing its intellectual lineage back to Alan Turing’s early work on “oracle machines,” “learning machines,” and the philosophical foundations of artificial intelligence. They argue that the type of computation required for human‑like common sense must be inherently probabilistic, because everyday reasoning constantly deals with uncertain observations and an ill‑defined world model.

QUERY is defined as a higher‑order probabilistic program that takes another probabilistic program (the “model”) as input. It executes the model repeatedly, generating full possible worlds, and discards any execution that fails to satisfy a user‑specified condition (e.g., an observed fact). The remaining executions constitute samples from the conditional distribution of the model given the condition. This “conditional simulation” is conceptually similar to rejection sampling, but because the model itself can encode arbitrary stochastic control flow, data structures, and even its own structure, QUERY can perform inference over both variables and model topology simultaneously. Consequently, QUERY subsumes traditional Bayesian networks, which require a fixed graph structure, and standard Monte‑Carlo methods, which typically treat the model as static.
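The conditional-simulation idea can be made concrete with a minimal Python sketch (not from the paper): a `query` function that repeatedly runs a generative model and keeps only the executions satisfying the condition, yielding samples from the conditional distribution. The `two_coins` model and the "at least one heads" condition are illustrative assumptions, not examples from the paper.

```python
import random

random.seed(0)

def query(model, condition, n_samples=1000):
    """Conditional simulation by rejection: run the generative model
    repeatedly and keep only the executions (possible worlds) that
    satisfy the condition, i.e. sample from P(world | condition)."""
    accepted = []
    while len(accepted) < n_samples:
        world = model()          # generate a full possible world
        if condition(world):     # discard worlds violating the observation
            accepted.append(world)
    return accepted

# Illustrative model: two fair coin flips.
def two_coins():
    return (random.random() < 0.5, random.random() < 0.5)

# Condition on observing "at least one heads".
worlds = query(two_coins, lambda w: w[0] or w[1])
p_both = sum(1 for a, b in worlds if a and b) / len(worlds)
# p_both estimates P(both heads | at least one heads) = 1/3
```

Because the model is an arbitrary program, the same `query` scheme applies unchanged when the model samples data structures, control flow, or even its own structure.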

To demonstrate the expressive power of QUERY, the paper walks through three illustrative domains.

  1. Biological taxonomy – The statement “birds can fly” is encoded as a probabilistic rule. Given observations about a particular bird’s species and wing morphology, QUERY infers the probability that the bird can fly, automatically handling exceptions (e.g., ostriches, penguins) without hand‑crafted exception lists.
  2. Everyday causal reasoning – Using the rule “rain makes roads slippery,” the authors condition on observed road‑wetness and accident reports. QUERY produces a posterior distribution over the causal strength of rain, illustrating how common‑sense causal judgments emerge from conditional simulation of a generative world model.
  3. Social decision‑making – The heuristic “hungry people go to a restaurant” is modeled with latent preferences, price sensitivity, and distance. Conditioning on a person’s hunger level and the set of nearby eateries yields a distribution over restaurant choice, capturing the probabilistic nature of human planning.
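The taxonomy case can be sketched in the same style. The species weights and flight probabilities below are invented for illustration; the point is that no exception list is needed: conditioning on the species alone reverses the default "birds can fly" judgment for penguins.

```python
import random

random.seed(1)

SPECIES = ["sparrow", "eagle", "penguin", "ostrich"]
FLIES_PROB = {"sparrow": 0.99, "eagle": 0.99,
              "penguin": 0.01, "ostrich": 0.01}

def bird_model():
    """Generative model: sample a species (flightless species are rare),
    then whether the bird flies. All numbers are illustrative."""
    species = random.choices(SPECIES, weights=[0.45, 0.45, 0.05, 0.05])[0]
    flies = random.random() < FLIES_PROB[species]
    return {"species": species, "flies": flies}

def query(model, condition, n=2000):
    """Rejection-style conditional simulation (as in the QUERY sketch)."""
    accepted = []
    while len(accepted) < n:
        w = model()
        if condition(w):
            accepted.append(w)
    return accepted

# Unconditionally, "birds can fly" holds as a strong default...
p_fly = sum(w["flies"] for w in query(bird_model, lambda w: True)) / 2000
# ...but conditioning on the species flips the judgment, with no
# hand-crafted exception list.
p_fly_penguin = sum(
    w["flies"]
    for w in query(bird_model, lambda w: w["species"] == "penguin")
) / 2000
```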

Each case shows how QUERY reproduces the human ability to evaluate possibilities, test hypotheses, and update beliefs when new evidence arrives. Importantly, the framework does not require explicit enumeration of all possible variables or a pre‑specified independence structure; the program itself defines the space of hypotheses, and the conditioning operation automatically discovers the relevant dependencies.

The authors then compare QUERY with existing probabilistic programming languages (e.g., Church, Anglican) and with inference algorithms such as Markov chain Monte Carlo (MCMC) and importance sampling. While those systems also support conditional inference, they typically separate model definition from the inference engine and rely on sophisticated sampling schemes to approximate the posterior. QUERY blurs this distinction: the same program that generates data also performs inference by virtue of its execution under a condition. This leads to a natural representation of “learning” as repeated conditional simulation, echoing Turing’s vision of machines that improve by interacting with an external oracle (here, the condition).
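Learning as repeated conditional simulation can be illustrated with a hypothetical coin-weight model (again, a sketch rather than anything from the paper): conditioning the same generative program on a growing body of evidence yields posteriors that progressively concentrate.

```python
import random

random.seed(2)

def weight_model(n_flips):
    """Generative model: draw a coin weight uniformly, then simulate
    that many flips (True = heads). Purely illustrative."""
    w = random.random()
    flips = [random.random() < w for _ in range(n_flips)]
    return w, flips

def posterior_weights(flips_observed, n=500):
    """QUERY-style inference: keep only weights whose simulated flips
    exactly reproduce the observed sequence."""
    accepted = []
    while len(accepted) < n:
        w, flips = weight_model(len(flips_observed))
        if flips == flips_observed:
            accepted.append(w)
    return accepted

# The posterior mean rises and concentrates as heads accumulate:
# 2 observed heads vs. 8 observed heads.
few = posterior_weights([True, True])
many = posterior_weights([True] * 8)
mean_few = sum(few) / len(few)    # analytically Beta(3,1), mean 0.75
mean_many = sum(many) / len(many)  # analytically Beta(9,1), mean 0.90
```

No separate inference engine is involved: "learning" is just the same generative program re-simulated under an ever-stronger condition.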

Finally, the paper positions QUERY as a modern synthesis of three of Turing’s legacies: (1) the formal notion of computability, (2) the early conceptualization of machine learning, and (3) the statistical perspective on inference. By embedding conditional simulation within a universal computational substrate, QUERY offers a principled route to endow artificial agents with the flexibility and adaptability characteristic of human common sense. The authors conclude by outlining future research directions, including scaling QUERY to high‑dimensional perception tasks, integrating symbolic knowledge bases, and applying the framework to autonomous robotics, conversational agents, and automated scientific discovery.