Are random axioms useful?


The famous Gödel incompleteness theorem says that for every sufficiently rich formal theory (containing formal arithmetic in some natural sense) there exist true unprovable statements. Such statements would be natural candidates for being added as axioms, but where can we obtain them? One classical (and well-studied) approach is to add to some theory T an axiom that claims the consistency of T. In this note we discuss another approach (motivated by Chaitin's version of the Gödel theorem) and show that it is not really useful, in the sense that it does not help us prove new interesting theorems, at least if we do not limit proof complexity. We also discuss some related questions.


💡 Research Summary

The paper investigates the idea of enriching a formal arithmetic theory by adding “random axioms” – statements asserting that a randomly chosen string has high Kolmogorov complexity. The motivation comes from Chaitin’s version of Gödel’s incompleteness theorem, which shows that for any sufficiently strong theory there exist true statements of the form “C(x) > n” that are unprovable within the theory. The author asks whether, by taking a random string x of length N and adding the axiom “C(x) ≥ N − 1000” (or similar), we can obtain a stronger theory that helps prove new, interesting theorems, while being practically certain that the added axiom is true.
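The "practical certainty" rests on a simple counting bound: there are fewer than 2^(N−c) binary programs shorter than N − c, so at most a 2^(−c) fraction of the 2^N strings of length N can satisfy C(x) < N − c. A minimal sketch of this bound in exact arithmetic (the function name is illustrative, not from the paper):

```python
from fractions import Fraction

def compressible_fraction(N: int, c: int) -> Fraction:
    """Upper bound on the fraction of length-N strings x with C(x) < N - c.

    There are 2^0 + 2^1 + ... + 2^(N-c-1) = 2^(N-c) - 1 binary programs
    shorter than N - c, versus 2^N strings of length N.
    """
    return Fraction(2 ** (N - c) - 1, 2 ** N)

# A uniformly random 10000-bit string violates the proposed axiom
# "C(x) >= N - 1000" with probability strictly below 2^-1000:
assert compressible_fraction(10000, 1000) < Fraction(1, 2 ** 1000)
```

So the added axiom is false with probability at most about 2^(−1000): negligible, though never zero, which is why a soundness argument is still needed.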

The paper proceeds in several steps:

  1. Soundness (Proposition 1).
    The author formalises a probabilistic proof‑search procedure that may at each step toss a fair coin to generate a random string and then adopt the corresponding statement R(r) as an axiom, provided the fraction of “bad” strings (those violating R) is bounded by a pre‑specified δ. An initial “budget” ε limits the total allowed error probability. By a backward‑induction argument on the proof‑strategy tree, it is shown that if the overall probability of reaching a target formula F exceeds ε, then F must be true. Hence the method is sound: the chance of proving a false statement is bounded by the chosen ε.

  2. Conservativity (Proposition 2).
    The author strengthens the previous result: if a probabilistic proof strategy can reach a formula F with probability greater than ε, then F is already provable in the original (non‑extended) theory. The proof introduces the notion of a “strong” node in the strategy tree—one from which F follows from the axioms accumulated so far. By analysing the fraction of strong children at each branching point and using the given bound on the number of bad strings, the author shows that any node with success probability exceeding its remaining budget must be strong. Consequently, any formula that can be obtained with probability > ε is not genuinely new; it is already derivable without random axioms.

  3. Proof‑size considerations (Proposition 3).
    The transformation from a probabilistic proof to a deterministic one in the previous argument incurs an exponential blow‑up in proof length, because one must combine proofs for all disjuncts in a large disjunction. The author asks whether a polynomial‑size transformation might exist. He shows that if such a transformation were possible, then PSPACE would collapse to NP. The argument uses the standard PSPACE‑complete language TQBF and its interactive proof (Arthur–Merlin) system: the interactive protocol can be simulated by a probabilistic proof strategy of polynomial length. If this could be turned into a polynomial‑size ordinary proof, the witness would place TQBF in NP, implying PSPACE = NP, which is widely believed false. Therefore, efficient derandomisation of these proofs is unlikely.

  4. Adding full Kolmogorov‑complexity information (Proposition 4).
    The author also examines the extreme case where one adds, for every string x, the exact equality “C(x) = k” as an axiom (non‑constructively). He shows that this theory is equivalent to the original arithmetic theory augmented with all true universal statements (∀x φ(x)). The reasoning is that statements of the form C(x) > n are universal (they assert that no program of length ≤ n outputs x), while C(x) ≤ n are existential and already provable when true. Hence, having complete complexity data gives precisely the same power as having all true universal sentences.

  5. Open questions.
    The paper ends with a discussion of weaker ways to add complexity information, such as axioms of the form C(x) ≥ n − O(1) for “most” strings of length n, and asks whether such partial enrichments could keep certain fixed statements unprovable while still providing useful new theorems.
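The budget discipline of Proposition 1 can be mimicked with toy bookkeeping (a sketch; the class and method names are illustrative, not from the paper): each adopted random axiom whose bad strings have measure delta spends delta of the initial budget epsilon, and by the union bound the probability that any adopted axiom is false never exceeds epsilon.

```python
from fractions import Fraction

class BudgetedProofSearch:
    """Toy model of the error budget in Proposition 1 (illustrative)."""

    def __init__(self, epsilon: Fraction):
        self.budget = epsilon     # remaining allowed error probability
        self.spent = Fraction(0)  # accumulated union-bound error

    def adopt_random_axiom(self, delta: Fraction) -> bool:
        """Adopt an axiom true for all but a delta-fraction of strings."""
        if delta > self.budget:
            return False          # refuse: would break the guarantee
        self.budget -= delta
        self.spent += delta
        return True

search = BudgetedProofSearch(Fraction(1, 100))        # epsilon = 1/100
assert search.adopt_random_axiom(Fraction(1, 2**10))  # cheap axiom: accepted
assert not search.adopt_random_axiom(Fraction(1, 2))  # too risky: refused
assert search.spent <= Fraction(1, 100)               # union bound holds
```

The point of the backward-induction proof is exactly that this accounting is sound: whatever the strategy does, false axioms are adopted with total probability at most epsilon.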
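To make the role of TQBF in Proposition 3 concrete, here is a direct recursive evaluator for quantified Boolean formulas (an illustrative sketch, not part of the paper): it uses space linear in the number of variables, as befits a PSPACE problem, yet explores exponentially many branches, mirroring the exponential blow-up incurred when the probabilistic proof is made deterministic.

```python
def eval_qbf(quantifiers, matrix, assignment=()):
    """Evaluate Q1 x1 ... Qn xn . matrix(x1, ..., xn).

    quantifiers is a string over 'A' (for all) and 'E' (exists);
    matrix is a predicate on a tuple of booleans.
    Space: O(n) recursion depth.  Branches explored: up to 2^n.
    """
    if not quantifiers:
        return matrix(assignment)
    q, rest = quantifiers[0], quantifiers[1:]
    branches = (eval_qbf(rest, matrix, assignment + (b,))
                for b in (False, True))
    return all(branches) if q == 'A' else any(branches)

# For-all x, exists y: (x XOR y) is true (choose y = not x) ...
assert eval_qbf('AE', lambda v: v[0] != v[1])
# ... but exists x, for-all y: (x XOR y) is false.
assert not eval_qbf('EA', lambda v: v[0] != v[1])
```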
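The asymmetry behind Proposition 4 (upper bounds on C(x) are certifiable by exhibiting a short description, lower bounds are not) can be illustrated with any real compressor standing in for the universal machine. This is an assumption for illustration only: C itself is uncomputable, and zlib merely gives an effective upper bound up to encoding details.

```python
import zlib

def complexity_upper_bound(x: bytes) -> int:
    """Length in bits of a zlib description of x: a computable upper
    bound on the complexity of x (up to an additive constant), i.e. a
    certificate for an existential statement of the form C(x) <= n."""
    return 8 * len(zlib.compress(x, 9))

regular = b"ab" * 500  # a highly regular 1000-byte string
assert complexity_upper_bound(regular) < 8 * len(regular)

# By contrast, "C(x) > n" asserts that NO program of length <= n
# prints x: a universal statement that no finite certificate can
# establish, which is why complete complexity information has the
# same power as adding all true universal sentences.
```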

Overall conclusion:
Random axioms based on high Kolmogorov complexity are logically safe (they rarely introduce false statements), but they do not yield genuinely new provable theorems; any statement that can be proved with non‑negligible probability is already provable in the original theory. Moreover, if one cares about proof length, converting probabilistic proofs into ordinary ones would cause a collapse of major complexity classes, suggesting that the approach cannot improve proof efficiency either. Adding complete complexity information merely reproduces the effect of adding all true universal sentences, offering no substantive mathematical advantage. Consequently, while the idea is attractive, it does not provide a practical method for extending formal theories in a way that leads to new, interesting mathematics.

