The Equivalence of Sampling and Searching
In a sampling problem, we are given an input x, and asked to sample approximately from a probability distribution D_x. In a search problem, we are given an input x, and asked to find a member of a nonempty set A_x with high probability. (An example is finding a Nash equilibrium.) In this paper, we use tools from Kolmogorov complexity and algorithmic information theory to show that sampling and search problems are essentially equivalent. More precisely, for any sampling problem S, there exists a search problem R_S such that, if C is any “reasonable” complexity class, then R_S is in the search version of C if and only if S is in the sampling version. As one application, we show that SampP=SampBQP if and only if FBPP=FBQP: in other words, classical computers can efficiently sample the output distribution of every quantum circuit, if and only if they can efficiently solve every search problem that quantum computers can solve. A second application is that, assuming a plausible conjecture, there exists a search problem R that can be solved using a simple linear-optics experiment, but that cannot be solved efficiently by a classical computer unless the polynomial hierarchy collapses. That application will be described in a forthcoming paper with Alex Arkhipov on the computational complexity of linear optics.
💡 Research Summary
The paper investigates two fundamental computational tasks—sampling and searching—and establishes that they are essentially interchangeable under very general conditions. A sampling problem S is defined by a family of probability distributions {Dₓ} indexed by inputs x; the goal is to output a sample y whose distribution is ε‑close (in total variation distance) to Dₓ, using only polynomial resources. A search problem R is defined by a family of non‑empty solution sets {Aₓ}; the goal is to output any y∈Aₓ with success probability at least 1‑δ, again in polynomial time. The paper's central claim is that for every sampling problem S there exists a search problem Rₛ such that, for any “reasonable” complexity class C (including deterministic, randomized, and quantum polynomial‑time classes), S belongs to the sampling version of C if and only if Rₛ belongs to the search version of C.
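The ε‑closeness requirement can be made concrete via total variation distance. A minimal sketch follows; the two distributions and the tolerance are illustrative examples, not data from the paper:

```python
# Total variation distance between two finite distributions:
#   d_TV(D, E) = (1/2) * sum_y |D(y) - E(y)|.
def tv_distance(d, e):
    support = set(d) | set(e)
    return 0.5 * sum(abs(d.get(y, 0.0) - e.get(y, 0.0)) for y in support)

# A sampler for D_x is acceptable if its output distribution E
# satisfies tv_distance(D_x, E) <= eps.
target = {"00": 0.5, "11": 0.5}                   # illustrative D_x
observed = {"00": 0.48, "11": 0.49, "01": 0.03}   # illustrative sampler output
print(tv_distance(target, observed))              # ≈ 0.03
```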
The construction of Rₛ relies on Kolmogorov complexity. For a fixed input x, the construction considers the conditional Kolmogorov complexity K(y|x) of a candidate sample y, and fixes a threshold t that depends polynomially on the input length, the allowed error ε, and the failure probability δ. The solution set Aₓ is defined as the set of strings y in the support of Dₓ whose conditional complexity exceeds t. By standard counting arguments, a random draw from Dₓ is incompressible relative to x with overwhelming probability, so Aₓ is non‑empty and, moreover, carries almost all of the probability mass of Dₓ. Consequently, any algorithm that can produce a high‑complexity sample from Dₓ solves the associated search problem, and conversely, any algorithm that can find a high‑complexity element of Aₓ can be turned into a sampler for Dₓ by repeatedly invoking the search routine and appropriately mixing the outputs.
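The incompressibility claim rests on a one‑line counting bound from algorithmic information theory (standard material, stated here in generic notation rather than the paper's own): there are fewer than 2^t programs of length below t, so at most 2^t − 1 strings can satisfy K(y|x) < t, and therefore

```latex
\Pr_{y \sim D_x}\bigl[\, K(y \mid x) < t \,\bigr] \;\le\; \bigl(2^{t}-1\bigr)\cdot \max_{y} D_x(y),
```

which is negligible whenever Dₓ has min‑entropy comfortably above t.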
The paper proves two directions of reduction. (1) If C contains a sampler for S, then running that sampler already yields a C‑search algorithm for Rₛ. Since Kolmogorov complexity is uncomputable, the reduction never tests the complexity condition explicitly; instead, the counting argument guarantees that each sample exceeds the threshold t with probability bounded away from zero, so polynomially many repetitions suffice to succeed with probability at least 1−δ. (2) If C contains a search algorithm for Rₛ, then one can simulate a sampler for S by invoking the search algorithm a polynomial number of times, collecting the returned high‑complexity elements, and outputting a random element among them. The high‑complexity guarantee ensures that the output distribution is statistically close to Dₓ; the error analysis shows that the total variation distance can be made at most ε by choosing the parameters appropriately.
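Direction (1) can be caricatured in a few lines of Python. Because K(y|x) is uncomputable, the sketch below substitutes zlib compressed length as a crude, purely illustrative stand‑in for descriptive complexity; the threshold, sample length, and repetition count are arbitrary choices for the demo, not the paper's parameters:

```python
import os
import zlib

def complexity_proxy(y: bytes) -> int:
    """Crude, computable stand-in for K(y): zlib-compressed length."""
    return len(zlib.compress(y))

def search_via_sampler(sampler, threshold: int, tries: int = 10):
    """Mirror reduction (1): run the sampler until it emits a
    'high-complexity' string. A sample from a high-entropy
    distribution is incompressible with high probability,
    so few tries suffice."""
    for _ in range(tries):
        y = sampler()
        if complexity_proxy(y) > threshold:
            return y
    return None  # sampler's distribution was too compressible

# Illustrative high-entropy D_x: 64 uniformly random bytes.
y = search_via_sampler(lambda: os.urandom(64), threshold=60)
assert y is not None  # random bytes never compress below ~64 bytes
```

A degenerate sampler that always outputs `b"a" * 64` fails the same test, since its output compresses to a handful of bytes.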
A crucial technical contribution is the careful selection of the threshold t. The paper shows that t can be set to roughly |x|+log(1/ε)+log(1/δ) plus a modest additive constant, which guarantees that the strings with K(y|x)≥t carry all but a small fraction of the probability mass of Dₓ. Notably, because Kolmogorov complexity is uncomputable, the construction never needs to test the complexity condition directly: the counting argument ensures that honestly generated samples satisfy it with high probability, so no efficient complexity test within C is required.
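The arithmetic behind the threshold choice is elementary and worth seeing once. The sketch below (with illustrative values of n and t, not the paper's) bounds the fraction of n‑bit strings that are t‑compressible:

```python
# At most 2^t - 1 programs have length < t, so at most 2^t - 1 strings y
# can have K(y|x) < t. Among the 2^n strings of length n, the
# compressible fraction is therefore at most (2^t - 1) / 2^n,
# which collapses once t sits even modestly below n.
def compressible_fraction_bound(n: int, t: int) -> float:
    return (2 ** t - 1) / 2 ** n

print(compressible_fraction_bound(100, 80))  # < 2^-20: under one in a million
```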
With this equivalence in hand, the paper derives several notable corollaries. First, it proves that SampP = SampBQP (the classes of distributions efficiently samplable by classical versus quantum polynomial‑time machines) if and only if FBPP = FBQP (the classes of function problems solvable with bounded error by probabilistic versus quantum polynomial‑time algorithms). In other words, classical computers can efficiently sample the output distribution of every quantum circuit if and only if they can efficiently solve every search problem that quantum computers can solve. This bridges a gap between two previously distinct lines of research on quantum advantage.
Second, assuming the widely believed conjecture underlying the BosonSampling model (Aaronson‑Arkhipov), the paper argues that there exists a search problem R that can be solved by a simple linear‑optics experiment (i.e., by physically sampling photons passing through a passive interferometer) but that no classical polynomial‑time algorithm can solve unless the polynomial hierarchy collapses. While the detailed proof of this claim is deferred to a forthcoming joint paper with Alex Arkhipov, the present work outlines the high‑level reduction: the linear‑optics device naturally produces samples from a distribution with extremely high Kolmogorov complexity, which correspond to solutions of a search problem that is hard for classical algorithms under standard complexity‑theoretic assumptions.
Overall, the paper provides a robust, information‑theoretic framework that unifies sampling and search. By leveraging Kolmogorov complexity, it shows that the “hardness” of generating a distribution and the “hardness” of finding a particular high‑complexity element are two sides of the same coin. This insight not only clarifies the relationship between classical and quantum computational power but also opens new avenues for designing cryptographic primitives and experimental demonstrations of quantum supremacy based on search‑type tasks rather than pure sampling. The work stands as a significant conceptual advance, linking algorithmic information theory with computational complexity and quantum physics.