Selectivity in Probabilistic Causality: Where Psychology Runs Into Quantum Physics


Given a set of several inputs into a system (e.g., independent variables characterizing stimuli) and a set of several stochastically non-independent outputs (e.g., random variables describing different aspects of responses), how can one determine, for each of the outputs, which of the inputs it is influenced by? The problem has applications ranging from modeling pairwise comparisons to reconstructing mental processing architectures to conjoint testing. A necessary and sufficient condition for a given pattern of selective influences is provided by the Joint Distribution Criterion, according to which the problem of “what influences what” is equivalent to that of the existence of a joint distribution for a certain set of random variables. For inputs and outputs with finite sets of values this criterion translates into a test of consistency of a certain system of linear equations and inequalities (Linear Feasibility Test) which can be performed by means of linear programming. While new in the behavioral context, both this test and the Joint Distribution Criterion on which it is based have been previously proposed in quantum physics, in dealing with generalizations of Bell inequalities for the quantum entanglement problem. The parallel between this problem and that of selective influences in behavioral sciences is established by observing that noncommuting measurements in quantum physics are mutually exclusive and can therefore be treated as different levels of one and the same factor.


💡 Research Summary

The paper tackles a fundamental problem that arises in many scientific domains: given several input factors (e.g., stimulus parameters) and several stochastic outputs that are not mutually independent (e.g., response variables), how can one determine which inputs influence which outputs? While the answer is trivial when the outputs are independent, real‑world data—whether from psychology, economics, or physics—often violate this assumption. The authors develop a rigorous probabilistic framework for “selective influences” and provide a concrete, testable criterion for its validity.

First, the authors formalize the experimental design. A finite set of factors Φ = {α₁,…,α_m} is introduced, each factor α having a finite set of levels (factor points). A treatment φ is a tuple that selects one level from each factor; the collection of admissible treatments is denoted T and need not be the full Cartesian product (incomplete designs are allowed). For each treatment φ, a vector of jointly distributed random variables A(φ) = (A₁,…,A_n)(φ) is observed. Different treatments are mutually exclusive, so there is no a priori joint distribution across treatments.
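This design formalism is easy to make concrete. The sketch below (factor names and level sets are invented for illustration, not taken from the paper) enumerates the treatment set T of a full factorial 2×2 design:

```python
from itertools import product

# Hypothetical 2x2 design: two factors, each with two levels (factor points).
factors = {"alpha1": [1, 2], "alpha2": [1, 2]}

# A treatment selects one level from each factor. Here T is the full
# Cartesian product; an incomplete design would keep only a subset.
treatments = [dict(zip(factors, levels)) for levels in product(*factors.values())]

print(len(treatments))  # 4 mutually exclusive treatments
```

Since treatments are mutually exclusive, each element of this list indexes its own observed distribution of (A₁,…,A_n); nothing in the data directly links distributions across different treatments.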

The central object is a selective‑influence diagram, a mapping M that assigns to each output A_i a subset Φ_i of factors. In the simplest “bijective” case each output is linked to exactly one factor, yielding a diagram α₁ → A₁, …, α_n → A_n. The question is whether such a diagram correctly captures the causal structure of the experiment.

To answer this, the authors introduce the Joint Distribution Criterion (JDC). The criterion states that the diagram is valid if and only if there exists a collection of latent random variables H_{xα}—one for every factor point xα, i.e., level x of factor α—such that for every treatment φ the joint distribution of the corresponding latent variables (H_{φ(α₁)},…,H_{φ(α_n)}), where φ(α) denotes the level of factor α selected by φ, coincides with the observed distribution of (A₁,…,A_n)(φ). In other words, a single global joint distribution over all latent variables must exist that reproduces every observed treatment distribution as a marginal. The latent entity R appearing in the original definition of selective influence can always be taken to be an ordinary random variable (Theorem 2.3), eliminating any need for abstract “random entities.”
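The force of the criterion is easiest to see in a toy counterexample (ours, not the paper's): three binary variables that are pairwise perfectly anticorrelated have three perfectly legitimate pairwise distributions, yet no global joint distribution can produce all three as marginals:

```python
from itertools import product

# Toy counterexample (not from the paper): binary X, Y, Z that are pairwise
# perfectly anticorrelated. A joint distribution would have to put all its
# mass on triples with x != y, y != z, and x != z -- but with only two
# values available, no such triple exists.
support = [(x, y, z) for x, y, z in product([0, 1], repeat=3)
           if x != y and y != z and x != z]

assert support == []  # empty support: no joint distribution can exist
```

This is exactly the situation the JDC rules out: each treatment-level distribution is fine in isolation, but they cannot all be marginals of one joint distribution over the latent variables.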

When the factor levels and output values are finite, the existence of such a joint distribution can be expressed as a linear feasibility problem. One introduces variables representing the probabilities of each possible configuration of the latent H‑variables and writes linear constraints that enforce (i) non‑negativity and normalization, and (ii) equality of the induced marginals with the empirically observed treatment distributions. The resulting system of linear equations and inequalities can be solved by standard linear programming methods. The authors call this the Linear Feasibility Test (LFT).
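A minimal sketch of such a test for the simplest case—two binary factors, two binary outputs, and the bijective diagram α₁ → A₁, α₂ → A₂—is given below, assuming SciPy is available. The latent-state encoding and the function name `lft_feasible` are ours; the unknowns are the 16 probabilities of the joint states of the four latent variables H_{0,α₁}, H_{1,α₁}, H_{0,α₂}, H_{1,α₂}:

```python
from itertools import product

import numpy as np
from scipy.optimize import linprog

def lft_feasible(p):
    """Linear feasibility test for a 2x2 design with binary outputs.

    p[(i, j)][(a, b)] = P(A1 = a, A2 = b) under the treatment that sets
    alpha1 to level i and alpha2 to level j (i, j, a, b all in {0, 1}).
    """
    # Joint states s = (H_{0,a1}, H_{1,a1}, H_{0,a2}, H_{1,a2}): 16 unknowns q(s).
    states = list(product((0, 1), repeat=4))
    A_eq, b_eq = [], []
    for i, j in product((0, 1), repeat=2):      # treatment (i, j)
        for a, b in product((0, 1), repeat=2):  # outcome (a, b)
            # Marginal of (H_{i,alpha1}, H_{j,alpha2}) must equal the
            # observed distribution of (A1, A2) under treatment (i, j).
            A_eq.append([1.0 if s[i] == a and s[2 + j] == b else 0.0
                         for s in states])
            b_eq.append(p[(i, j)][(a, b)])
    # Normalization of q is implied: each treatment's probabilities sum to 1.
    res = linprog(c=np.zeros(len(states)), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * len(states), method="highs")
    return res.status == 0  # 0 = feasible, 2 = infeasible

# Independent uniform outputs trivially satisfy the diagram:
uniform = {(i, j): {(a, b): 0.25 for a in (0, 1) for b in (0, 1)}
           for i in (0, 1) for j in (0, 1)}
print(lft_feasible(uniform))  # True
```

Only feasibility matters here, so the objective vector is zero; any standard LP solver that reports infeasibility would serve equally well.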

Remarkably, the same mathematical structure appears in quantum physics. In Bell‑type experiments the measurements are non‑commuting and therefore mutually exclusive; each measurement setting can be treated as a distinct level of a factor. The question of whether a “local hidden‑variable” (classical) explanation exists for the observed correlations is precisely whether a joint distribution over all measurement outcomes exists—i.e., whether the JDC holds. Consequently, violations of Bell inequalities correspond to the failure of the selective‑influence diagram in the quantum context. The paper thus establishes a deep analogy: the problem of selective influences in behavioral sciences is mathematically identical to the problem of classical explanations for quantum entanglement.
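The quantum side of the analogy can be illustrated with the textbook CHSH setup (our illustration; the angles and the correlation formula are standard, not taken from the paper). For two spin-1/2 particles in the singlet state, the correlation between measurements at angles θ_a and θ_b is E = -cos(θ_a - θ_b), and any joint distribution over the four outcome variables forces the CHSH combination to stay within ±2:

```python
import math

# Textbook CHSH setup (illustrative): two settings per particle, chosen to
# maximize the quantum violation of the classical bound |S| <= 2.
a = [0.0, math.pi / 2]               # settings (factor levels) for particle 1
b = [math.pi / 4, 3 * math.pi / 4]   # settings (factor levels) for particle 2
E = lambda i, j: -math.cos(a[i] - b[j])  # singlet-state correlation

# If a joint distribution over all four outcomes existed (i.e., if the JDC
# held), |S| could not exceed 2; the singlet state yields 2*sqrt(2).
S = E(0, 0) - E(0, 1) + E(1, 0) + E(1, 1)
print(abs(S))  # 2.8284... > 2
```

In the paper's terms, the violation means that no selective-influence diagram linking each measurement outcome only to its own setting can reproduce these correlations.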

Beyond the conceptual bridge, the authors discuss practical implications. The framework applies to a wide range of psychological models: response‑time decomposition, Thurstonian scaling, parallel‑serial mental architectures, same‑different judgments, and conjoint testing. It also accommodates incomplete designs, where only a subset of all possible factor‑level combinations is experimentally realizable. In such cases the canonical rearrangement of factors ensures that each output can still be associated with a single (possibly dummy) factor, preserving the applicability of the JDC and LFT.

Statistical considerations are briefly addressed: the JDC and LFT are population‑level statements; sample‑based testing would require additional procedures (e.g., bootstrapping or likelihood‑ratio tests), which are outside the scope of the present work.

In summary, the paper provides:

  1. A precise definition of selective influences using the Joint Distribution Criterion.
  2. A constructive method (Linear Feasibility Test) for verifying the criterion when variables are finite.
  3. A demonstration that this method is mathematically equivalent to the Bell‑type analysis of quantum entanglement, thereby linking behavioral causality and quantum physics.
  4. Extensions to incomplete experimental designs and a discussion of potential applications across psychology, economics, and physics.

By translating a traditionally philosophical question about causal selectivity into a concrete linear‑programming problem, the authors open the door to systematic, algorithmic verification of causal structures in complex stochastic systems, and they reveal a surprising unity between the logic of human cognition and the foundations of quantum theory.

