Collapsing and Separating Completeness Notions under Average-Case and Worst-Case Hypotheses


This paper presents the following results on sets that are complete for NP.

  1. If there is a problem in NP that requires exponential time at almost all lengths, then every many-one NP-complete set is complete under length-increasing reductions that are computed by polynomial-size circuits.
  2. If there is a problem in coNP that cannot be solved by polynomial-size nondeterministic circuits, then every many-one complete set is complete under length-increasing reductions that are computed by polynomial-size circuits.
  3. If there exists a one-way permutation that is secure against subexponential-size circuits and there is a hard tally language in NP ∩ coNP, then there is a Turing-complete language for NP that is not many-one complete.

Our first two results use worst-case hardness hypotheses, whereas earlier work that showed similar results relied on average-case or almost-everywhere hardness assumptions. The use of both average-case and worst-case hypotheses in the last result is notable, as previous results obtaining the same consequence relied on almost-everywhere hardness assumptions.


💡 Research Summary

This paper investigates the relationships between different notions of completeness for NP, focusing on many‑one reductions and Turing reductions, and it does so under comparatively mild hardness assumptions. The authors obtain three main results.

  1. Length‑increasing many‑one reductions from worst‑case hardness.
    The first result shows that if there exists a language L in NP that requires exponential time on almost all input lengths (formally, for some ε>0, every algorithm deciding L needs more than 2^{εn} time on at least one string of almost every length n), then every many‑one NP‑complete set is also complete under length‑increasing reductions that can be computed by polynomial‑size circuits (i.e., P/poly reductions). The proof builds an intermediate language S that encodes triples (x, y, z) where x and z are hard instances of L, y is an instance of SAT, and a majority‑vote condition ties them together. Using the hardness of L, the authors argue that for each length n there must exist a pair (x, z) on which any many‑one reduction f from S to the target complete set A behaves honestly, meaning the output length exceeds a prescribed fraction of the input length. This yields a length‑increasing reduction from SAT to A that is computable by polynomial‑size circuits, with the pair (x, z) serving as nonuniform advice. The same argument works under the analogous hypothesis that the circuit complexity of L at length n exceeds 2^{εn} for all sufficiently large n.

  2. Analogous result for coNP via nondeterministic circuit hardness.
    The second theorem replaces the NP hardness assumption with a coNP hardness assumption: there exists a language H in coNP that is not in NP/poly (i.e., no family of polynomial‑size nondeterministic circuits decides H correctly on all inputs). By encoding tuples of strings from H into a language H′ and applying a similar majority construction, the authors show that every many‑one NP‑complete set is complete under length‑increasing P/poly reductions. The key technical step is a lemma showing that if H′ were in NP/poly then H would be as well, contradicting the hypothesis.

  3. Separation of Turing completeness from many‑one completeness using average‑case and worst‑case hypotheses.
    The third contribution addresses a long‑standing open problem posed by Ladner, Lynch, and Selman (1975): does there exist a language that is Turing‑complete for NP but not many‑one complete? Prior work achieved this separation only under “almost‑everywhere” hardness assumptions, which are considered very strong. The authors show that the separation can be obtained from a combination of two more plausible hypotheses:

    • (a) The existence of a 2^{εn}‑secure one‑way permutation (an average‑case hardness assumption that yields pseudorandom generators).
    • (b) The existence of a language in NEEE ∩ coNEEE that does not belong to EEE/log (a worst‑case hardness assumption).
    Using the secure one‑way permutation, they construct quasi‑polynomial‑time, length‑increasing reductions, and with hypothesis (b) they obtain a hard tally language in NP ∩ coNP whose hardness prevents the relevant many‑one reductions from being length‑increasing. Combining these tools, they build a language that is Turing‑complete for NP (any NP language reduces to it via polynomial‑time adaptive oracle queries) but cannot be many‑one complete under any polynomial‑time reduction.
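The distinction that drives result 3 is between asking the oracle a single fixed question and asking a sequence of questions that adapt to earlier answers. The following toy Python sketch illustrates that distinction only; the oracle, the languages, and every function name here are invented for illustration and are not the paper's construction:

```python
# Toy contrast between many-one and Turing reductions (illustrative only).
# A many-one reduction asks ONE non-adaptive question "is f(x) in A?";
# a Turing reduction may make several queries, each chosen after seeing
# earlier answers.

def oracle_A(q: str) -> bool:
    """Stand-in oracle set A: strings of odd length."""
    return len(q) % 2 == 1

def even_via_many_one(x: str) -> bool:
    """Decide 'len(x) is even' with ONE query: len(x) even iff len(x+'#') odd."""
    return oracle_A(x + "#")

def div4_via_turing(x: str) -> bool:
    """Decide 'len(x) divisible by 4' with ADAPTIVE queries to the same oracle."""
    if oracle_A(x):              # first query: odd length rules out divisibility by 4
        return False
    half = x[: len(x) // 2]      # second query depends on the first answer
    return not oracle_A(half)    # len(x) % 4 == 0 iff len(x) // 2 is even

assert even_via_many_one("ab") and not even_via_many_one("abc")
assert div4_via_turing("abcd") and not div4_via_turing("abcdef")
```

The paper's separation says, under its hypotheses, that this gap is real for NP-completeness: some set is complete in the adaptive (Turing) sense yet not in the single-question (many-one) sense.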

Overall, the paper demonstrates that many of the earlier “almost‑everywhere” hardness requirements can be replaced by either worst‑case hardness (for the collapse results) or by a mixture of average‑case and worst‑case hardness (for the separation result). The techniques involve circuit‑complexity lower bounds, hardness amplification, the use of secure permutations to obtain pseudorandomness, and careful encoding of hard instances into length‑increasing reductions. These contributions deepen our understanding of how different completeness notions relate and provide new, arguably more realistic, pathways to separating them.
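As a concrete illustration of the length‑increasing property that recurs throughout these results: a many‑one reduction f is length‑increasing when |f(x)| > |x| for every input x, and padding is the simplest way to achieve this while preserving membership. The sketch below is a toy example, not the paper's construction; the language PARITY and all names are invented for illustration:

```python
# Toy illustration of a length-increasing many-one reduction.
# Source language PARITY = strings with an even number of 1s.

def in_parity(x: str) -> bool:
    """Membership in the toy source language: even number of 1s."""
    return x.count("1") % 2 == 0

def reduce_padded(x: str) -> str:
    """Many-one reduction from PARITY to a padded copy of itself.

    Appending zeros does not change the number of 1s, so membership is
    preserved, and the output is strictly longer than the input.
    """
    return x + "0" * (len(x) + 1)

def is_length_increasing(f, inputs) -> bool:
    return all(len(f(x)) > len(x) for x in inputs)

samples = ["", "0", "1", "110", "1011"]
assert is_length_increasing(reduce_padded, samples)
assert all(in_parity(reduce_padded(x)) == in_parity(x) for x in samples)
```

Padding works here because the toy target is trivially "paddable"; the point of the paper's first two results is that, under worst‑case hardness hypotheses, every many‑one NP‑complete set admits length‑increasing P/poly reductions even without assuming such structure.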

