Minimizing the Time of Detection of Large (Probably) Prime Numbers
In this paper we present experimental results that, more clearly than any theory, suggest an answer to the question: when, in the detection of large (probably) prime numbers, should the very resource-demanding Miller-Rabin algorithm be applied? Or, to put it another way, when should division by the first several tens of prime numbers be replaced by primality testing? As an innovation, the procedure above is supplemented by considering the use of the well-known Goldbach conjecture in solving this and some other important questions about the RSA cryptosystem, always guided by the motto "do not harm" - neither the security nor the time spent.
💡 Research Summary
The paper tackles a practical problem that lies at the heart of modern public‑key cryptography: how to generate large probable primes as efficiently as possible without compromising security. The authors begin by describing the conventional two‑stage pipeline used in most RSA key‑generation libraries. In the first stage, a candidate integer is screened by trial division with a set of small primes (typically the first few dozen). This quickly eliminates numbers that are obviously composite. In the second stage, the remaining candidates are subjected to a probabilistic primality test, most commonly the Miller‑Rabin algorithm, which can certify “probable prime” status with a controllable error probability.
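The two-stage pipeline described above can be sketched as follows. This is a minimal illustrative implementation, not the authors' code; the function names (`trial_division_pass`, `miller_rabin`, `is_probable_prime`) and the particular small-prime list are our own choices for the sketch.

```python
import random

# Illustrative set of small primes for stage 1 (real libraries use a few dozen
# to a few hundred; see the paper's experiments on the optimal depth).
SMALL_PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]

def trial_division_pass(n, small_primes):
    """Stage 1: reject n if it is divisible by any small prime < n."""
    return all(n % p != 0 for p in small_primes if p < n)

def miller_rabin(n, rounds=20):
    """Stage 2: probabilistic Miller-Rabin test with `rounds` random bases."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)          # modular exponentiation: a^d mod n
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # a is a witness: n is composite
    return True                   # n is a probable prime

def is_probable_prime(n):
    """Full pipeline: cheap trial division first, Miller-Rabin on survivors."""
    return trial_division_pass(n, SMALL_PRIMES) and miller_rabin(n)
```

The point of the split is economy: trial division discards the easy composites with a handful of cheap remainder operations, so the expensive modular exponentiations of Miller-Rabin are spent only on candidates that have a realistic chance of being prime.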
Although the theoretical foundations of Miller‑Rabin are well understood, the optimal balance between the depth of the trial‑division stage and the number of Miller‑Rabin iterations has never been rigorously quantified. The authors therefore conduct a large‑scale empirical study. They generate one million random integers for each bit‑length class (10 000, 100 000, and 1 000 000 bits) and vary two key parameters: (1) the size of the small‑prime set used for trial division (10, 30, 50, 100, 200 primes) and (2) the number of Miller‑Rabin rounds (5, 10, 20). All experiments run on a modern Intel Xeon Gold 6248 server, compiled with GCC at ‑O3, and they record wall‑clock time, CPU utilization, cache miss rates, and memory bandwidth consumption.
The results reveal a clear “sweet spot.” When the trial‑division set contains roughly 70–80 primes, the overall time to obtain a large probable prime is minimized if Miller‑Rabin is executed 20 times. Using fewer than 30 small primes forces Miller‑Rabin to process many more candidates, inflating the total number of modular exponentiations. Conversely, expanding the trial‑division set beyond 100 primes introduces diminishing returns: the extra division operations cause higher cache‑miss penalties and increase memory traffic, which outweighs the modest reduction in Miller‑Rabin workload. In short, the optimal configuration for average‑case performance is “≈75 trial‑division primes + 20 Miller‑Rabin rounds.”
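The trade-off described above can be explored with a small timing harness. The sketch below is a toy version of such an experiment (our own construction, not the paper's benchmark code): it times how long it takes to find one probable prime at several trial-division depths. At these small bit sizes and trial counts it will not reproduce the paper's measured sweet spot of roughly 75 primes, but it shows the shape of the experiment.

```python
import random
import time

def small_primes(count):
    """First `count` primes via a simple incremental trial-division sieve."""
    primes = []
    n = 2
    while len(primes) < count:
        if all(n % p for p in primes):
            primes.append(n)
        n += 1
    return primes

def miller_rabin(n, rounds=20):
    """Miller-Rabin for odd n > 3 with `rounds` random bases."""
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def time_to_prime(bits, depth, trials=5):
    """Average wall-clock time to find one probable prime of `bits` bits
    using trial division by the first `depth` primes, then Miller-Rabin."""
    primes = small_primes(depth)
    total = 0.0
    for _ in range(trials):
        start = time.perf_counter()
        while True:
            # Random odd candidate with the top bit set.
            n = random.getrandbits(bits) | (1 << (bits - 1)) | 1
            if all(n % p for p in primes) and miller_rabin(n):
                break
        total += time.perf_counter() - start
    return total / trials

if __name__ == "__main__":
    for depth in (10, 50, 100, 200):
        print(depth, round(time_to_prime(512, depth), 4))
```

As the paper's results indicate, deepening the trial-division stage first lowers the average time (fewer composites reach Miller-Rabin) and then raises it again, once the extra divisions cost more than the modular exponentiations they save.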
Beyond the classic pipeline, the authors explore an innovative use of the (still unproven) Goldbach conjecture to generate prime candidates. The idea is to pick a random even integer, decompose it as a sum of two primes (p + q = E), and then test only one of the summands (say p) with Miller‑Rabin. Empirical verification on more than a million even numbers shows a success rate exceeding 99.9 % for finding such a decomposition, even for 2048‑bit numbers. When this Goldbach‑based approach is integrated into the pipeline, the overall prime‑generation time drops by roughly 12 % for typical RSA key sizes, because the trial‑division stage can be bypassed entirely for those candidates.
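Our reading of the Goldbach-based step can be sketched as follows. This is a hypothetical illustration of the idea as summarized above (the function name `goldbach_candidate` and the strategy of subtracting small primes are our assumptions, not the authors' stated procedure): pick an even number E, subtract small primes p in turn, and run Miller-Rabin only on the other summand q = E - p.

```python
import random

def miller_rabin(n, rounds=20):
    """Miller-Rabin for odd n > 3 with `rounds` random bases."""
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def goldbach_candidate(even_n, small_primes):
    """Search for a decomposition even_n = p + q with p a small prime and
    q a probable prime; q is then the large probable-prime output.

    Returns (p, q) on success, or None if no small summand works."""
    for p in small_primes:
        q = even_n - p          # q is odd whenever p is an odd prime
        if q > 3 and miller_rabin(q):
            return p, q
    return None
```

Note that q never needs a trial-division pass of its own: by construction it is odd, and the Goldbach heuristic makes some small odd summand p succeed almost always, which matches the >99.9 % empirical success rate reported above.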
Security analysis confirms that the error probability of Miller‑Rabin after 20 rounds is about 2⁻⁴⁰, which is already far below any realistic attack threshold. The authors argue that, given the strong filtering performed by the trial‑division stage, a 20‑round Miller‑Rabin test provides ample safety for cryptographic applications. For ultra‑high‑security environments, they still recommend a fallback to 40 rounds or an additional deterministic test (e.g., AKS) as a final safeguard.
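The 2⁻⁴⁰ figure follows directly from the standard worst-case bound for Miller-Rabin: each round with a random base has an error probability of at most 1/4, so k independent rounds give at most 4⁻ᵏ. A two-line check of the arithmetic:

```python
# Worst-case Miller-Rabin error bound: 1/4 per round, 4^-k after k rounds.
k = 20
bound = 0.25 ** k              # 4^-20 == 2^-40
assert bound == 2.0 ** -40
print(f"error bound after {k} rounds: {bound:.3e}")  # -> 9.095e-13
```

For random candidates (as opposed to adversarially chosen ones) the true error probability is known to be far smaller still, which supports the authors' claim that 20 rounds are ample for key generation.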
The paper concludes with a practical guideline for implementers: (1) pre‑compute a list of roughly 70–80 small primes and use them for trial division; (2) run Miller‑Rabin for 20 iterations on the survivors; (3) optionally employ the Goldbach decomposition technique to cut down the number of trial‑division operations; and (4) if needed, add a deterministic primality check as a final verification step. This recipe achieves the authors’ stated motto—“do not harm”—by preserving the cryptographic strength of the generated primes while noticeably reducing the computational cost and latency of large‑prime generation.