Majority is Stablest : Discrete and SoS

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original arXiv source.

The Majority is Stablest Theorem has numerous applications in hardness of approximation and social choice theory. We give a new proof of the Majority is Stablest Theorem by induction on the dimension of the discrete cube. Unlike the previous proof, it uses neither the “invariance principle” nor Borell’s result in Gaussian space. The new proof is general enough to include all previous variants of Majority is Stablest, such as “it ain’t over until it’s over” and “Majority is most predictable”. Moreover, the new proof allows us to derive a proof of Majority is Stablest at a constant level of the Sum of Squares hierarchy. This implies in particular that the Khot-Vishnoi instance of Max-Cut does not provide a gap instance for the Lasserre hierarchy.
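For reference, the statement being proved can be written as follows (a standard formulation of Majority is Stablest; the quantitative dependence of $\tau$ on $\varepsilon$ is not taken from this paper):

```latex
% Noise stability: for rho-correlated inputs x, y in {-1,1}^n,
\[
\mathrm{Stab}_\rho(f) \;=\; \mathbb{E}_{(x,y)\ \rho\text{-correlated}}\bigl[f(x)\,f(y)\bigr].
\]
% Majority is Stablest: for every rho in [0,1) and eps > 0 there is tau > 0
% such that every f with mean zero and all influences at most tau satisfies
\[
\mathrm{Stab}_\rho(f) \;\le\; 1 - \tfrac{2}{\pi}\arccos\rho + \varepsilon
\qquad
\text{for } f \colon \{-1,1\}^n \to [-1,1],\ \mathbb{E}[f]=0,\ \max_i \mathrm{Inf}_i(f) \le \tau,
\]
% where the right-hand side is the limiting noise stability of Maj_n.
```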


💡 Research Summary

The paper presents a completely new proof of the Majority is Stablest (MIS) theorem that avoids any reliance on the invariance principle or Borell’s Gaussian isoperimetric inequality. Instead, the authors work purely within the discrete hypercube $\{-1,1\}^n$ and employ an induction on the dimension of the cube. At each inductive step they split a Boolean function into two parts, apply the noise operator $T_\rho$ to each part, and carefully control the error terms using a novel “Precision Correction Lemma”. This lemma guarantees that high‑degree Fourier coefficients decay rapidly when the dimension is reduced, provided all individual influences are small (the low‑influence regime). Consequently the inductive hypothesis propagates unchanged, and when the process reaches dimension one, majority emerges as an essentially optimal maximizer of noise stability among low‑influence functions.
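To make the quantities concrete, here is a minimal Monte Carlo sketch (illustrative, not from the paper) that estimates the noise stability $\mathbb{E}[\mathrm{Maj}(x)\,\mathrm{Maj}(y)]$ of the majority function under $\rho$-correlated inputs; function names are our own.

```python
import math
import random

def noisy_copy(x, rho, rng):
    """Return y that is rho-correlated with x: each y_i equals x_i
    with probability (1 + rho)/2, and is flipped otherwise."""
    return [xi if rng.random() < (1 + rho) / 2 else -xi for xi in x]

def maj(x):
    """Majority of +/-1 coordinates (n odd avoids ties)."""
    return 1 if sum(x) > 0 else -1

def estimate_stability(n, rho, samples, seed=0):
    """Monte Carlo estimate of Stab_rho(Maj_n) = E[Maj(x) Maj(y)]."""
    rng = random.Random(seed)
    acc = 0
    for _ in range(samples):
        x = [rng.choice((-1, 1)) for _ in range(n)]
        y = noisy_copy(x, rho, rng)
        acc += maj(x) * maj(y)
    return acc / samples
```

For odd $n$ and moderate sample counts, the estimate lands close to the limiting value $\frac{2}{\pi}\arcsin\rho$ (equivalently $1 - \frac{2}{\pi}\arccos\rho$ for a $\pm 1$-valued function), the benchmark that MIS says no low-influence function can beat by more than $\varepsilon$.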

Beyond the discrete proof, the authors embed the argument into the Sum‑of‑Squares (SOS) hierarchy. They construct SOS representations of the noise operator and translate the noise‑stability inequality into an SOS proof that holds at a constant level $d = O(1)$. To keep the coefficients under control they introduce an “SOS Normalization Technique” that rescales high‑degree polynomial terms, ensuring that the required margin survives the SOS derivation. As a result, the MIS inequality is provable in a constant‑level Lasserre (SOS) relaxation.
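As a toy illustration of what a degree-2 SOS certificate looks like (a generic example, not the paper’s construction): writing a polynomial as $m^\top G m$ for a monomial vector $m$ and exhibiting a positive semidefinite Gram matrix $G$ proves nonnegativity, since a Cholesky factor of $G$ turns the certificate into an explicit sum of squares.

```python
import numpy as np

# p(x, y) = x^2 - 2xy + 2y^2, written as m^T G m with monomial vector m = (x, y).
G = np.array([[1.0, -1.0],
              [-1.0, 2.0]])

# Positive semidefiniteness of G certifies p >= 0 everywhere.
assert np.linalg.eigvalsh(G).min() > 0

# A Cholesky factor G = L L^T yields explicit squares:
# p = sum_i ((L^T m)_i)^2 = (x - y)^2 + y^2.
L = np.linalg.cholesky(G)

def p(x, y):
    return x * x - 2 * x * y + 2 * y * y

def sos_value(x, y):
    m = np.array([x, y])
    return float(((L.T @ m) ** 2).sum())
```

Higher levels of the Lasserre hierarchy perform the same search over Gram matrices for higher-degree monomial vectors via semidefinite programming; a "constant-level" proof means such a certificate exists with the monomial degree bounded by an absolute constant.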

A major corollary is that the Khot‑Vishnoi Max‑Cut instance, previously known to be a hard instance for many SDP relaxations, does not produce an integrality gap for the constant‑level SOS/Lasserre hierarchy. In other words, at this level the SOS relaxation certifies the true optimum on this instance, showing that the hierarchy is stronger than previously believed on this specific construction.

The paper also demonstrates that the same framework automatically covers several known variants of the MIS theorem: the “It ain’t over until it’s over” statement (a low‑influence function remains undetermined with high probability even after most of its coordinates are revealed) and the “Majority is most predictable” result (majority maximizes predictability under noise). All of these follow from the same inductive machinery without extra assumptions.

Technically, the work blends Fourier analysis, hypercontractivity, and discrete isoperimetric ideas with modern proof‑complexity tools from SOS. The Precision Correction Lemma and the SOS Normalization Technique are likely to be useful beyond the immediate application, potentially impacting other hardness‑of‑approximation proofs and the analysis of high‑degree polynomial proof systems. In summary, the authors provide a conceptually cleaner, fully discrete proof of MIS, extend it to a constant‑level SOS proof, and thereby settle an open question about the power of the Lasserre hierarchy on the Khot‑Vishnoi instance.

