Modified Trial Division Algorithm Using KNJ-Factorization Method To Factorize RSA Public Key Encryption
The security of the RSA algorithm depends on the positive integer N, which is the product of two distinct large prime numbers. Factoring such large numbers is a hard problem, and many algorithms have been proposed over the years. The proposed KNJ-Factorization algorithm provides a deterministic way to factorize an RSA modulus. It limits the search by considering only prime candidates; since every prime greater than 2 is odd, this also reduces the number of steps required. The algorithm is simple and easy to understand and implement: its central idea is to test only those candidate factors that are odd and prime. KNJ-Factorization works especially efficiently when the two factors lie close to √N, and the method can be sped up further by reducing the time spent on primality testing, which substantially decreases the overall running time.
💡 Research Summary
The paper addresses the fundamental security premise of RSA, namely that the public modulus N is the product of two large, secret prime numbers p and q. Factoring such a number is the only known way to break RSA, and the authors propose a new deterministic method called “KNJ‑Factorization” that they claim improves upon the classic trial‑division approach.
Core Idea
The algorithm restricts the search space to numbers that are both odd and prime. Starting from the largest odd integer ≤ √N, it repeatedly (1) tests whether the current candidate is prime (using a standard primality test such as Miller‑Rabin), and (2) if the candidate is prime, attempts to divide N by it. If the division yields zero remainder, the candidate is identified as one of the RSA factors, and the complementary factor is obtained by simple division. The candidate is then decremented by two and the process repeats until a factor is found.
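The procedure described above can be sketched in a few lines of Python. This is my own minimal reconstruction, not code from the paper; a deterministic Miller-Rabin test (valid for 64-bit integers) stands in for whatever primality test the authors assume:

```python
from math import isqrt

def is_prime(n: int) -> bool:
    """Deterministic Miller-Rabin for 64-bit integers (stand-in for
    the primality test the paper leaves unspecified)."""
    if n < 2:
        return False
    small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in small:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in small:                      # these bases suffice below 2**64
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def knj_factor(n: int):
    """Scan odd candidates downward from floor(sqrt(n)); attempt a
    division only when the candidate passes the primality test."""
    c = isqrt(n)
    if c % 2 == 0:        # start at the largest odd integer <= sqrt(n)
        c -= 1
    while c >= 3:
        if is_prime(c) and n % c == 0:
            return c, n // c
        c -= 2            # decrement by two: skip even candidates
    return None
```

For a modulus with close primes, e.g. `knj_factor(101 * 103)`, the very first candidate succeeds, illustrating the best case described above.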
Theoretical Claims
Because RSA primes are odd, eliminating even candidates halves the number of trial divisions. Moreover, by checking primality first, the algorithm avoids performing division on composite numbers, which the authors argue reduces the total number of expensive division operations. By the prime number theorem, the number of candidates examined is roughly √N / ln √N (the number of primes below √N), leading to an overall time complexity of O(√N / ln √N · T_primality), where T_primality is the cost of a primality test (polylogarithmic in N). In the best case, when p and q lie very close to √N, the algorithm terminates after only a few iterations, yielding a substantial speed-up over naïve trial division.
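To put rough numbers on these counts, a back-of-the-envelope comparison (my own illustration, not from the paper; constants ignored) of the odd candidates scanned by plain trial division against the prime-only candidates:

```python
from math import log

# Approximate candidate counts up to sqrt(N) for an n-bit modulus:
# odd candidates ~ sqrt(N)/2, prime candidates ~ sqrt(N)/ln(sqrt(N)).
for bits in (64, 128, 256):
    root = 2.0 ** (bits / 2)          # sqrt(N)
    odd = root / 2
    prime = root / log(root)          # prime number theorem estimate
    print(f"{bits}-bit N: odd ~ {odd:.2e}, prime ~ {prime:.2e}, "
          f"ratio ~ {odd / prime:.1f}")
```

The prime filter saves only a factor of about ln(√N)/2 (around 11 for a 64-bit modulus), which is the "logarithmic factor" referred to in the assessment below.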
Experimental Evaluation
The authors implement the method in software and test it on synthetic composites of 64, 128, and 256 bits. For these modest sizes, they report speed improvements of 30 %–50 % compared with plain trial division, and up to 70 % when the two primes are adjacent to each other. No experiments are presented for cryptographically relevant sizes (≥ 512 bits, especially the standard 1024‑bit and 2048‑bit RSA moduli).
Critical Assessment
Asymptotic Complexity – Even after discarding even numbers, the algorithm still performs a linear scan up to √N. The extra primality test adds overhead rather than reducing the dominant √N term. Consequently, the asymptotic complexity remains exponential in the bit‑length of N, far slower than the sub‑exponential algorithms that dominate modern factorization research (Quadratic Sieve, General Number Field Sieve).
Dependence on Prime Proximity – The claimed advantage hinges on p and q being “close” to each other, i.e., their difference being small relative to √N. RSA key generation deliberately selects p and q uniformly at random from a large prime interval, making such proximity statistically unlikely for properly generated keys. In the typical case where the primes are far apart, the algorithm offers no benefit over ordinary trial division.
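A quick illustration (my own, not from the paper) of how strongly the downward scan's iteration count depends on the gap between the smaller prime and √N:

```python
from math import isqrt

def scan_steps(n: int, p: int) -> int:
    """Odd candidates examined by a downward scan from floor(sqrt(n))
    before reaching the factor p (assumes p <= sqrt(n))."""
    start = isqrt(n)
    if start % 2 == 0:
        start -= 1
    return (start - p) // 2 + 1

print(scan_steps(101 * 103, 101))      # close primes: very few steps
print(scan_steps(101 * 1000003, 101))  # distant primes: thousands of steps
```

Even at toy sizes the gap dominates; for properly generated RSA primes the expected gap is astronomically large.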
Cost of Primality Testing – Modern primality tests (Miller‑Rabin, Baillie‑PSW) are extremely fast, but they are not free. For each odd candidate the algorithm must invoke a test before attempting division. When the candidate set is still O(√N / ln N), the cumulative cost of these tests can dominate the runtime, especially for large N where the number of candidates is astronomically high.
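A rough cost estimate (my own arithmetic, not the paper's): even at an optimistic one microsecond per primality test, examining every odd candidate below √N for a 256-bit modulus would take on the order of 10^24 years:

```python
bits = 256
candidates = 2.0 ** (bits / 2) / 2    # odd candidates up to sqrt(N)
seconds = candidates * 1e-6           # assume 1 microsecond per test
years = seconds / (3600 * 24 * 365)
print(f"~{years:.1e} years")
```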
Lack of Real‑World Benchmarks – The experimental section is limited to very small numbers that can be factored trivially even by naïve methods. No data are provided for 512‑bit, 1024‑bit, or larger moduli, which are the sizes used in practice. Without such benchmarks, the claim that KNJ‑Factorization “fundamentally decreases the time complexity” remains unsubstantiated for the problem domain that matters.
Comparison with State‑of‑the‑Art – The paper does not compare its method against the best known factorization algorithms. Even the simplest randomized method, Pollard’s rho, typically outperforms trial division for numbers of a few hundred bits, while the General Number Field Sieve (GNFS) is the de facto standard for numbers beyond roughly 100 digits. The deterministic nature of KNJ‑Factorization is not a decisive advantage, because modern factorization tools routinely use fast randomized sub‑routines (e.g., ECM for peeling off medium‑sized factors) wherever they win.
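For comparison, Pollard’s rho fits in a few lines. This is a standard textbook sketch (Floyd cycle-finding variant), not code from the paper, and it assumes an odd composite input:

```python
from math import gcd

def pollard_rho(n: int, c: int = 1) -> int:
    """Return a nontrivial factor of an odd composite n using the
    iteration x -> x^2 + c mod n; retries with a new c on failure."""
    x = y = 2
    d = 1
    while d == 1:
        x = (x * x + c) % n          # tortoise: one step
        y = (y * y + c) % n          # hare: two steps
        y = (y * y + c) % n
        d = gcd(abs(x - y), n)
    return d if d != n else pollard_rho(n, c + 1)

print(pollard_rho(101 * 1000003))    # finds one of the two prime factors
```

Its expected running time is O(N^(1/4)) group operations on a semiprime, already far below the O(√N / ln √N) scan of the prime-filtered trial division discussed above.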
Potential Use Cases – The algorithm could serve as a pedagogical illustration of how simple filters (oddness, primality) affect trial division, or as a lightweight fallback when the modulus is known to be poorly generated (e.g., primes chosen from a narrow interval). In such niche scenarios, the method may indeed reduce the number of division operations.
Future Directions Suggested by the Authors – The paper mentions accelerating primality testing (hardware‑based Miller‑Rabin, SIMD implementations) and parallelizing the candidate scan across multiple cores or GPUs. While these optimizations could improve raw throughput, they do not alter the fundamental O(√N) search space, and thus the method would still be impractical for cryptographically sized RSA keys.
Conclusion – KNJ‑Factorization is essentially a modestly optimized trial‑division algorithm. Its deterministic nature and focus on odd primes make it easy to implement, but the theoretical speed‑up is limited to a logarithmic factor and only manifests when the RSA primes are unusually close. For realistic RSA key sizes, the algorithm remains orders of magnitude slower than the best sub‑exponential methods, and the paper’s lack of large‑scale experimental validation prevents a convincing claim of practical relevance. Consequently, while the work adds a small piece to the broader literature on factorization heuristics, it does not constitute a breakthrough in breaking RSA encryption.