Testing Booleanity and the Uncertainty Principle
Let f:{-1,1}^n -> R be a real function on the hypercube, given by its discrete Fourier expansion, or, equivalently, represented as a multilinear polynomial. We say that it is Boolean if its image is in {-1,1}. We show that every function on the hypercube with a sparse Fourier expansion must either be Boolean or far from Boolean. In particular, we show that a multilinear polynomial with at most k terms must either be Boolean, or output values different than -1 or 1 for a fraction of at least 2/(k+2)^2 of its domain. It follows that given oracle access to f, together with the guarantee that its representation as a multilinear polynomial has at most k terms, one can test Booleanity using O(k^2) queries. We show an \Omega(k) queries lower bound for this problem. Our proof crucially uses Hirschman’s entropic version of Heisenberg’s uncertainty principle.


💡 Research Summary

The paper investigates the problem of testing whether a real‑valued function on the Boolean hypercube, given by its multilinear (Fourier) expansion, is Boolean—that is, whether its range is confined to {-1,1}. The authors focus on the regime where the Fourier expansion is sparse: the polynomial representation contains at most k non‑zero monomials. Under this sparsity assumption they prove a sharp dichotomy: either the function is perfectly Boolean, or it deviates from Booleanity on a non‑negligible fraction of inputs. More precisely, if the polynomial has at most k terms and is not Boolean, then at least a 2/(k+2)^2 fraction of the hypercube points produce values outside {-1,1}.
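The dichotomy is easy to observe numerically. The sketch below (illustrative, not from the paper) evaluates a sparse multilinear polynomial on the whole hypercube and compares its non‑Boolean fraction against the 2/(k+2)^2 bound, using the 2-term example f(x) = (x1 + x2)/2, which outputs 0 on half the cube:

```python
from itertools import product

def eval_sparse(terms, x):
    """Evaluate a sparse multilinear polynomial at x in {-1,1}^n.
    terms: list of (coefficient, subset-of-variable-indices) pairs."""
    total = 0.0
    for coeff, S in terms:
        monomial = coeff
        for i in S:
            monomial *= x[i]
        total += monomial
    return total

def non_boolean_fraction(terms, n):
    """Fraction of hypercube points where f(x) is not in {-1, 1}."""
    bad = sum(
        1
        for x in product([-1, 1], repeat=n)
        if abs(abs(eval_sparse(terms, x)) - 1.0) > 1e-9
    )
    return bad / 2 ** n

# f(x) = (x1 + x2)/2: two terms, outputs 1, 0, 0, -1 on the four points.
f = [(0.5, [0]), (0.5, [1])]
k = len(f)
print(non_boolean_fraction(f, 2))  # 0.5
print(2 / (k + 2) ** 2)            # 0.125 -- the paper's lower bound
```

The observed non‑Boolean fraction (1/2) indeed exceeds the guaranteed minimum (1/8), consistent with the structural theorem.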

The proof hinges on Hirschman’s entropic version of Heisenberg’s uncertainty principle. By interpreting the squared Fourier coefficients as a probability distribution, the authors lower‑bound the sum of the Shannon entropies of the function’s value distribution and its spectral distribution. Sparsity forces the spectral entropy to be low, which in turn forces the value distribution to have high entropy—impossible unless the function places noticeable mass outside {-1,1}. This entropy argument yields the quantitative lower bound on the “non‑Boolean mass” of the function.
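For intuition, one standard discrete analogue of Hirschman's inequality (the Maassen–Uffink form for the normalized Fourier transform on \(\{-1,1\}^n\); the exact normalization used in the paper may differ) reads: if \(\|f\|_2 = 1\) under the uniform measure, then by Parseval both
\[
p(x) = \frac{f(x)^2}{2^n} \qquad \text{and} \qquad q(S) = \hat f(S)^2
\]
are probability distributions, and their Shannon entropies satisfy
\[
H(p) + H(q) \;\ge\; n \log 2 .
\]
If \(f\) has at most \(k\) non-zero coefficients, then \(H(q) \le \log k\), so \(H(p) \ge n \log 2 - \log k\): the squared values of a sparse function must be spread out over the cube, which is incompatible with being sharply concentrated on \(\{-1,1\}\) except for genuinely Boolean functions.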

From the structural result they derive an algorithmic testing procedure. Assuming oracle access to the function and the guarantee that its multilinear representation uses at most k monomials, the tester draws O(k^2) random inputs and checks whether each output lies in {-1,1}. By the dichotomy above, if the function is not Boolean, the probability of observing a non‑Boolean output in a single random trial is at least 2/(k+2)^2. Standard concentration bounds (e.g., Chernoff) then guarantee that O(k^2) samples suffice to detect non‑Booleanity with high confidence.

The authors also prove a lower bound on query complexity: any algorithm that distinguishes Boolean from non‑Boolean functions under the same sparsity promise must make \Omega(k) queries. This bound is obtained via an information‑theoretic argument that constructs a family of functions differing on a small set of inputs yet indistinguishable with fewer than ck queries, for a suitable constant c. Consequently, the presented tester is optimal up to a quadratic factor in the number of queries.

In the context of prior work, most Booleanity testing results assume bounds on the total degree or on the full number of Fourier coefficients, which can be overly pessimistic for sparse polynomials. By measuring complexity via the number of monomials, the paper aligns more closely with practical scenarios such as low‑complexity circuit verification, quantized neural networks, and cryptographic proof systems where the underlying functions naturally have few terms.

Finally, the paper highlights the broader methodological contribution: the combination of an entropic uncertainty principle with combinatorial Fourier analysis provides a powerful tool for deriving quantitative structural properties of Boolean functions. This approach may be extended to other testing problems, such as checking low‑influence, junta properties, or spectral concentration, especially when sparsity constraints are present. The work thus opens a promising avenue for future research at the intersection of harmonic analysis, information theory, and property testing.