A polytime proof of correctness of the Rabin-Miller algorithm from Fermat's little theorem

Notice: This research summary and analysis were automatically generated using AI technology. For full accuracy, please refer to the original arXiv source.

Although a deterministic polytime algorithm for primality testing is now known, the Rabin-Miller randomized primality test remains the most efficient and widely used algorithm. We prove the correctness of the Rabin-Miller algorithm in the theory V1 for polynomial-time reasoning, from Fermat's Little Theorem. This is interesting because the Rabin-Miller algorithm is a polytime randomized algorithm, which runs in the class RP (i.e., the class of polytime Monte Carlo algorithms), with a sampling space exponential in the length of the binary encoding of the input number. (The class RP contains P.) However, we show how to express its correctness in the language of V1, and we show that the formula expressing correctness can be proved with polytime reasoning from Fermat's Little Theorem, which is generally expected to be independent of V1. Our proof is also conceptually very basic in that we use the extended Euclidean algorithm, for computing greatest common divisors, as the main workhorse of the proof. For example, we make do without proving the Chinese Remainder Theorem, which is used in the standard proofs.


💡 Research Summary

The paper investigates the correctness of the Rabin‑Miller primality test within the framework of bounded‑arithmetic theory V1, which captures polynomial‑time reasoning. Although deterministic polynomial‑time primality testing is known, the Rabin‑Miller algorithm remains the most practical due to its simplicity and speed. The authors aim to formalize the algorithm’s correctness in V1, starting from Fermat’s Little Theorem, and to show that this formalization can be carried out using only ΣB₁ formulas, the extended Euclidean algorithm, and basic number‑theoretic facts, without invoking stronger results such as the Chinese Remainder Theorem.

The paper begins with a concise overview of V1: a two‑sorted theory whose first sort handles natural numbers (indices) and whose second sort handles binary strings (encodings of numbers). V1 permits bounded quantifiers, ΣB₀ and ΣB₁ formulas, and includes comprehension, induction, and minimization schemes for these formulas. The crucial meta‑theorem (Theorem 2.1) states that a function is polynomial‑time computable iff it is definable by a ΣB₁ formula provable in V1. This provides the logical setting for expressing algorithmic properties.

Next, the authors recall essential number‑theoretic tools. They present the extended Euclidean algorithm, prove its correctness within V1, and use it to generate Bézout coefficients satisfying ax + by = gcd(a,b). Euler’s theorem and Lagrange’s theorem are discussed, but the paper notes that their proofs are not known to be formalizable in V1. Fermat’s Little Theorem (FLT) is taken as an external assumption: while it is provable in stronger theories, it is believed to be independent of V1. The authors instead work with the contrapositive form of FLT, which can be expressed as a ΣB₁ statement: for any composite n that is not a Carmichael number, at most half of the elements a∈Zₙ* satisfy a^{n‑1}≡1 (mod n).
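The extended Euclidean algorithm described above can be sketched as follows; this is a minimal illustrative Python version (not the paper's V1 formalization), returning the gcd together with Bézout coefficients x, y satisfying ax + by = gcd(a, b):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and a*x + b*y == g."""
    if b == 0:
        return a, 1, 0
    # gcd(a, b) = gcd(b, a % b); back-substitute the Bezout coefficients.
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

# Bezout coefficients yield modular inverses: if gcd(a, n) == 1,
# then a*x ≡ 1 (mod n), so x is the inverse of a in Z_n*.
g, x, y = extended_gcd(240, 46)
assert g == 2 and 240 * x + 46 * y == 2
```

The ability to compute inverses in Zₙ* this way is exactly what the paper's counting argument relies on in place of the Chinese Remainder Theorem.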

Using this observation, the paper defines pseudoprimes (primes or Carmichael numbers) and shows that the simple randomized Fermat test, which checks whether a^{n−1} ≡ 1 (mod n) for a random a ∈ Zₙ*, fails with probability at most ½ on any non-Carmichael composite. This motivates the Rabin-Miller test, which refines the basic Fermat test by examining the sequence a^{s·2^i} for i = 0, …, h, where n−1 = s·2^h with s odd. The algorithm rejects if the initial Fermat test fails, or if the first element of the sequence that differs from 1 is not −1 (mod n). All steps are implementable in polynomial time via repeated squaring.
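The test just described can be sketched in a few lines of Python; this is an illustrative rendering of the standard algorithm, not code from the paper (function names are ours). Python's three-argument `pow` performs the modular exponentiation by repeated squaring mentioned above:

```python
import random

def miller_rabin_round(n, a):
    """One round of the Rabin-Miller test on odd n > 3 with base a.
    Returns True if n passes (is 'probably prime' for this base)."""
    # Write n - 1 = s * 2^h with s odd.
    s, h = n - 1, 0
    while s % 2 == 0:
        s //= 2
        h += 1
    x = pow(a, s, n)  # repeated squaring: polynomial time
    if x == 1 or x == n - 1:
        return True
    # Square up to h - 1 more times; the first non-1 value must be -1.
    for _ in range(h - 1):
        x = (x * x) % n
        if x == n - 1:
            return True
    return False  # a is a witness that n is composite

def is_probable_prime(n, rounds=20):
    if n < 4:
        return n in (2, 3)
    return all(miller_rabin_round(n, random.randrange(2, n - 1))
               for _ in range(rounds))
```

For example, the Carmichael number 561 passes the plain Fermat test for every base coprime to it, yet `miller_rabin_round(561, 2)` already detects its compositeness.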

The core of the correctness proof is a counting argument formalized in V1. The authors define "witnesses of compositeness" (type 1 and type 2) and construct a mapping from the set of non-witnesses into the set of witnesses using multiplication modulo n. The extended Euclidean algorithm supplies the modular inverses that make this mapping injective, implying that at least half of the possible bases a are witnesses. This statement is expressed as a ΣB₁ formula asserting that the size of the set of bad bases is at most half the size of Zₙ*. Because V1 can reason about ΣB₁ formulas, it can prove the correctness of the Rabin‑Miller algorithm: the probability of a false positive is at most ½, and by standard amplification (k independent rounds) the error drops to 2^{−k}.
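The "at least half are witnesses" bound can be checked empirically for small composites. The brute-force sketch below (our own illustration, deliberately exponential rather than the paper's polytime reasoning) counts the Rabin-Miller witnesses among all bases 1 ≤ a < n:

```python
def count_witnesses(n):
    """Exhaustively count Rabin-Miller compositeness witnesses for odd
    composite n among bases 1 <= a < n (brute force, for illustration)."""
    # Write n - 1 = s * 2^h with s odd.
    s, h = n - 1, 0
    while s % 2 == 0:
        s //= 2
        h += 1
    witnesses = 0
    for a in range(1, n):
        x = pow(a, s, n)
        if x == 1 or x == n - 1:
            continue  # a is a strong liar
        for _ in range(h - 1):
            x = (x * x) % n
            if x == n - 1:
                break  # a is a strong liar
        else:
            witnesses += 1
    return witnesses

# For any odd composite n, at least half the bases are witnesses,
# so a single round errs with probability at most 1/2.
n = 221  # 13 * 17
assert count_witnesses(n) >= (n - 1) // 2
```

In practice the witness fraction is well above ½ (the classical bound is ¾ over all bases); the summary's ½ bound over Zₙ* is what the V1 counting argument establishes.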

Finally, the paper discusses the logical significance of the result. If V1 could prove FLT, the same machinery would yield a polynomial‑time factoring algorithm, contradicting widely held beliefs about the hardness of integer factorization and the security of RSA. Hence, the dependence on FLT highlights a precise boundary: Rabin‑Miller’s correctness is provable in V1 only when FLT is taken as an external axiom. The work thus clarifies which number‑theoretic principles are essential for the algorithm’s soundness and demonstrates how a randomized RP algorithm can be analyzed within a deterministic polynomial‑time logical framework.

