On the computational complexity of finding hard tautologies


It is well-known (cf. Krajíček–Pudlák 1989) that a polynomial time algorithm finding tautologies hard for a propositional proof system $P$ exists iff $P$ is not optimal. Such an algorithm takes $1^{(k)}$ and outputs a tautology $\tau_k$ of size at least $k$ such that $P$ is not p-bounded on the set of all $\tau_k$’s. We consider two more general search problems involving finding a hard formula, {\bf Cert} and {\bf Find}, motivated by two hypothetical situations: that one can prove that $\mathrm{NP} \neq \mathrm{coNP}$ and that no optimal proof system exists. In {\bf Cert} one is asked to find a witness that a given non-deterministic circuit with $k$ inputs does not define $\mathrm{TAUT} \cap \{0,1\}^k$. In {\bf Find}, given $1^{(k)}$ and a tautology $\alpha$ of size at most $k^{c_0}$, one should output a size $k$ tautology $\beta$ that has no size $k^{c_1}$ $P$-proof from substitution instances of $\alpha$. We shall prove, assuming the existence of an exponentially hard one-way permutation, that {\bf Cert} cannot be solved by a time $2^{O(k)}$ algorithm. Using a stronger hypothesis about the proof complexity of the Nisan-Wigderson generator we show that both problems {\bf Cert} and {\bf Find} are actually only partially defined for infinitely many $k$ (i.e. there are inputs corresponding to $k$ for which the problem has no solution). The results are based on interpreting the Nisan-Wigderson generator as a proof system.


💡 Research Summary

The paper investigates the algorithmic difficulty of two search problems that are natural extensions of the classic task of producing hard tautologies for a propositional proof system P. The motivation stems from two hypothetical scenarios: (i) a proof that NP ≠ coNP, and (ii) the non‑existence of an optimal proof system. The authors define:

  • Cert – Given a nondeterministic Boolean circuit C with k input bits, the task is to produce a concrete witness that C fails to define the language TAUT ∩ {0,1}^k (the tautologies encoded by k-bit strings). In other words, one must exhibit a formula on which C gives the wrong answer: roughly, either a tautology that C rejects, or a non-tautology that C accepts (witnessed, e.g., by a falsifying assignment).

  • Find – Given the unary string 1^{(k)} and a tautology α whose size is bounded by k^{c₀}, the goal is to output a new tautology β of size exactly k such that no P‑proof of β of size ≤ k^{c₁} can be derived from substitution instances of α. This problem asks for a “hard” tautology that remains hard even when the proof system is allowed to use α as an axiom schema.
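The two search tasks can be illustrated with a brute-force toy model. All names below are illustrative assumptions, not the paper's construction: the paper concerns nondeterministic circuits, large k, and succinct witnesses, precisely the regime where brute force fails.

```python
from itertools import product

def is_tautology(formula, n_vars):
    """Brute-force truth-table check; `formula` maps a tuple of bools to a bool."""
    return all(formula(bits) for bits in product([False, True], repeat=n_vars))

def cert_witness(claimed_decider, candidate_formulas, n_vars):
    """Toy analogue of Cert: look for a formula on which a claimed
    decider for TAUT disagrees with the ground truth."""
    for f in candidate_formulas:
        if claimed_decider(f) != is_tautology(f, n_vars):
            return f
    return None  # mirrors the paper's point: a solution need not exist

# A decider that (wrongly) claims everything is a tautology is easy to refute:
taut = lambda b: b[0] or not b[0]   # x OR NOT x
non_taut = lambda b: b[0]           # x
witness = cert_witness(lambda f: True, [taut, non_taut], 1)  # -> non_taut
```

Here the witness found is the non-tautology x; in the paper's setting the circuit is nondeterministic and k is large, so this kind of exhaustive search is exactly what the lower bound rules out.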

The paper’s technical core rests on two cryptographic/complexity assumptions:

  1. Existence of an exponentially hard one‑way permutation (OWP). This is a bijection f on {0,1}ⁿ that is computable in polynomial time but hard to invert even for adversaries with exponential resources: roughly, any such adversary inverts f only with exponentially small probability. The OWP provides a source of computational hardness that can be lifted to the level of proof complexity.
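A standard candidate for a one-way permutation (not one used in the paper; chosen here purely for illustration) is modular exponentiation, whose inverse is the discrete-logarithm problem. The parameters below are tiny toy values; real candidates use cryptographic sizes.

```python
# Toy candidate one-way permutation: x -> g^x mod p permutes
# {0, ..., p-2} onto {1, ..., p-1} when g generates the multiplicative
# group Z_p^*.  Inverting it is the discrete-log problem, conjectured
# hard at cryptographic sizes.  Parameters here are illustrative only.
P, G = 23, 5  # 5 is a primitive root mod 23

def f(x: int) -> int:
    return pow(G, x, P)  # built-in three-argument pow: modular exponentiation

# Sanity check: f is injective on its domain, hence a bijection.
images = [f(x) for x in range(P - 1)]
assert len(set(images)) == P - 1
```

The bijectivity check passes because 5 has order 22 in Z₂₃^*, so the 22 powers 5⁰, …, 5²¹ hit every element of {1, …, 22} exactly once.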

  2. A strong proof‑complexity hypothesis for the Nisan‑Wigderson (NW) generator. The NW generator G maps a short seed s ∈ {0,1}^ℓ (with ℓ much smaller than n, e.g. polylogarithmic in n) to a long string G(s) ∈ {0,1}ⁿ. The hypothesis asserts that, for the fixed propositional proof system P, the tautologies expressing that a given string b lies outside the range of G admit no polynomial‑size P‑proofs. In other words, the range‑avoidance statements attached to G have high proof complexity with respect to P.
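The mechanics of an NW-style generator can be sketched in a few lines: each output bit applies a base function to the seed positions selected by one set of a combinatorial design (sets with small pairwise intersections). The numbers and the parity base function below are placeholder assumptions; in the real construction the base function must be genuinely hard (e.g., built from the one-way permutation), and the number of output bits far exceeds the seed length.

```python
def nw_generator(seed_bits, design, base_fn):
    """Nisan-Wigderson-style stretch: output bit i applies the base
    function to the seed positions listed in design set S_i."""
    return [base_fn([seed_bits[j] for j in s]) for s in design]

# Toy design: 6 seed bits, four 3-element sets, any two sets sharing
# at most one index -- the NW "combinatorial design" property.
design = [(0, 1, 2), (0, 3, 4), (1, 3, 5), (2, 4, 5)]

# Placeholder base function; parity is NOT hard and is used here only
# to keep the example self-contained and checkable.
parity = lambda bits: sum(bits) % 2

output = nw_generator([1, 0, 0, 1, 0, 1], design, parity)  # -> [1, 0, 0, 1]
```

With realistic parameters the number of design sets (hence the output length) is much larger than the seed length ℓ; the toy numbers above are only meant to make the small-intersection property visible.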

Using the OWP, the authors show that no algorithm running in time 2^{O(k)} can solve Cert. Roughly, the hardness of the permutation is embedded into an NW‑style generator whose outputs encode the relevant propositional statements; a 2^{O(k)}‑time algorithm that reliably produced counter‑examples to circuits claiming to decide TAUT ∩ {0,1}^k could then be turned into an inverter for the permutation, contradicting its assumed exponential hardness. This establishes the exponential‑time lower bound for Cert.

The second, stronger hypothesis about the NW generator yields a more striking result: both Cert and Find are only partially defined, for infinitely many values of k. That is, there exist infinitely many input lengths for which no valid witness (for Cert) or no suitable hard tautology (for Find) exists at all. The proof interprets the NW generator itself as a proof system: each seed s acts as a “proof” that the string it generates encodes a tautology. Under the high‑proof‑complexity assumption, the required hard objects simply fail to exist for certain input lengths, which translates into the non‑existence of solutions to the search problems. In the case of Find, even when a small tautology α is supplied, no size‑k tautology evades short P‑proofs from substitution instances of α, because the NW generator’s outputs already account for the hard formulas at those sizes.
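Schematically, the “seed as proof” reading described above can be written as follows (the notation P_G, α_b, and τ_b is illustrative, chosen to match the summary's wording rather than taken from the paper):

```latex
\[
  P_G(s) \;=\; \alpha_{G(s)} \qquad \text{for each seed } s \in \{0,1\}^{\ell},
\]
% where $\alpha_b$ denotes the propositional formula encoded by the string
% $b \in \{0,1\}^n$.  The map $P_G$ is a sound proof system iff every output
% of $G$ encodes a tautology, and the hardness hypothesis says that for
% $b \notin \mathrm{Rng}(G)$ the tautology $\tau_b$ expressing
% ``$b \notin \mathrm{Rng}(G)$'' admits no polynomial-size $P$-proof.
```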

The paper therefore establishes two complementary impossibility statements:

  • Exponential‑time barrier for Cert: Assuming an exponentially hard OWP, no algorithm running in time 2^{O(k)} can, for all k, produce a correct counter‑example to a given nondeterministic circuit’s claim to decide TAUT ∩ {0,1}^k.

  • Partial undefinedness for Cert and Find: Assuming the NW generator’s outputs have high proof‑complexity for a fixed proof system P, there are infinitely many k for which the search problems have no solution at all.

These results deepen the connection between proof‑complexity, cryptographic hardness, and the meta‑question of optimal proof systems. By treating the NW generator as a proof system, the authors provide a novel perspective that bridges derandomization techniques with propositional proof theory. The work suggests that, unless we can break widely believed cryptographic assumptions, the quest for uniformly hard tautologies (or efficient certificates of circuit failure) is fundamentally limited. Moreover, it highlights that the non‑existence of an optimal proof system would have concrete algorithmic consequences: certain natural search problems would be ill‑posed for infinitely many input sizes.

In the broader context, the paper contributes to the ongoing program of relating lower bounds in proof complexity to separations such as NP ≠ coNP and to the structural landscape of propositional proof systems. It also opens avenues for future research: investigating whether weaker assumptions (e.g., average‑case hardness of one‑way functions) suffice for similar barriers, or exploring other generators (e.g., hardness‑vs‑randomness constructions) as potential proof systems. The methodology of interpreting combinatorial generators as logical proof systems may prove fruitful for further advances at the intersection of complexity theory, cryptography, and proof complexity.

