Practical Implementation of a Deep Random Generator
In earlier work we introduced the concept of Deep Randomness and its value for designing unconditionally secure communication protocols. In particular, we gave an example of such a protocol and outlined how to design a Deep Random Generator associated with it. Deep Randomness is a form of randomness in which, at each draw of the random variable, not only is the result unpredictable but the distribution itself is also unknown to any observer. In this article, we recall the formal definition of Deep Randomness and present two practical algorithmic methods for implementing a Deep Random Generator on classical computing resources. We also discuss their performance and their parameters.
💡 Research Summary
The paper revisits the concept of Deep Randomness, a form of randomness in which not only the outcome of each draw is unpredictable but the underlying probability distribution is also hidden from any observer. After recalling the formal definition introduced in earlier work, the authors present two practical algorithmic constructions that can be realized on conventional classical computing platforms, and they evaluate their performance and parameter choices.
The first construction is a Dynamic Markov Chain Transformation (DMCT). A base Markov chain is defined, but its transition matrix is periodically re‑randomized using a secret seed combined with real‑time system inputs such as clock ticks or network traffic statistics. The re‑randomization is performed through a non‑linear cryptographic function, making the current transition matrix computationally infeasible to infer. Because the transition matrix changes frequently, the distribution from which samples are drawn varies over time, satisfying the Deep Randomness requirement that the distribution be unknown to an adversary.
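The paper's exact DMCT algorithm is not reproduced in this summary; the following is a minimal Python sketch of the idea as described above, with SHA-256 standing in for the unspecified non-linear cryptographic function, clock ticks as the real-time system input, and the class name `DMCTSketch` being a hypothetical label:

```python
import hashlib
import os
import random
import time

class DMCTSketch:
    """Illustrative sketch of a Dynamic Markov Chain Transformation.

    Samples are drawn from a Markov chain whose transition matrix is
    periodically re-randomized from a secret seed mixed with a
    real-time input, so the sampling distribution changes over time.
    """

    def __init__(self, n_states=8, seed=None, period=64):
        self.n = n_states
        self.seed = seed if seed is not None else os.urandom(32)
        self.period = period        # draws between re-randomizations
        self.state = 0
        self.draws = 0
        self._rerandomize()

    def _rerandomize(self):
        # Mix the secret seed with a real-time input (clock ticks here)
        # through a non-linear cryptographic function (SHA-256 here).
        material = hashlib.sha256(
            self.seed + time.time_ns().to_bytes(8, "big")
        ).digest()
        rng = random.Random(material)
        # Build a fresh row-stochastic transition matrix.
        self.matrix = []
        for _ in range(self.n):
            row = [rng.random() for _ in range(self.n)]
            total = sum(row)
            self.matrix.append([p / total for p in row])

    def next_state(self):
        if self.draws and self.draws % self.period == 0:
            self._rerandomize()
        weights = self.matrix[self.state]
        self.state = random.choices(range(self.n), weights=weights)[0]
        self.draws += 1
        return self.state
```

Each call to `next_state` advances the chain, and every `period` draws the whole transition matrix is rebuilt, so an observer never accumulates enough samples from any single distribution to estimate it reliably.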
The second construction is a Mixed Chaotic Mapping (MCM). Several chaotic maps (e.g., logistic map, Tinkerbell map, Henon map) are interleaved, and at each iteration the parameters of the active map are replaced by values derived from a high‑entropy seed. Chaotic systems are extremely sensitive to parameter changes, so even tiny updates cause the output sequence to diverge dramatically, effectively reshaping the probability distribution on the fly. The combination of multiple maps further obscures any statistical regularities, providing a higher entropy gain than the DMCT approach.
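Again the paper's concrete MCM algorithm is not given here; a minimal sketch under the same assumptions (SHA-256 as the parameter-derivation function, two of the named maps interleaved, hypothetical class name `MCMSketch`) might look like:

```python
import hashlib
import os

class MCMSketch:
    """Illustrative sketch of a Mixed Chaotic Mapping generator.

    Two chaotic maps (logistic and Henon) are interleaved; at each
    iteration the active map's parameters are re-derived from a
    high-entropy seed, so the sampling distribution keeps changing.
    """

    def __init__(self, seed=None):
        self.seed = seed if seed is not None else os.urandom(32)
        self.ctr = 0
        self.lx = 0.4                  # logistic-map state
        self.hx, self.hy = 0.1, 0.2   # Henon-map state

    def _derive(self, tag, lo, hi):
        # Derive a parameter in [lo, hi] from the seed, a tag, and a
        # counter via a cryptographic hash (a stand-in derivation).
        h = hashlib.sha256(
            self.seed + tag + self.ctr.to_bytes(8, "big")
        ).digest()
        frac = int.from_bytes(h[:8], "big") / 2**64
        return lo + frac * (hi - lo)

    def next_value(self):
        self.ctr += 1
        if self.ctr % 2:
            # Logistic map x -> r*x*(1-x), r kept in the chaotic regime.
            r = self._derive(b"r", 3.9, 4.0)
            self.lx = r * self.lx * (1.0 - self.lx)
        else:
            # Henon map with parameters dithered within a bounded regime.
            a = self._derive(b"a", 1.2, 1.4)
            b = self._derive(b"b", 0.29, 0.31)
            self.hx, self.hy = (
                1.0 - a * self.hx * self.hx + self.hy,
                b * self.hx,
            )
        # Combine the two trajectories into a value in [0, 1).
        return (self.lx + self.hx) % 1.0
```

Because each map's sensitivity to its parameters is extreme, even the small per-iteration dithering shown here causes the trajectories, and hence the output distribution, to diverge from anything an observer could model from past samples.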
Both schemes are subjected to standard statistical randomness batteries (NIST SP800‑22, Dieharder) to confirm that the generated bits pass conventional randomness tests. In addition, the authors introduce a “distribution obfuscation” metric that quantifies how well an attacker can estimate the underlying distribution using Bayesian inference or maximum‑likelihood techniques. Experimental results show that DMCT yields an average entropy increase of about 1.2 bits per sample, while MCM achieves roughly 1.5 bits per sample. Simulated attacks attempting to reconstruct the distribution succeed less than 5 % of the time, a substantial improvement over traditional pseudo‑random generators.
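The paper's "distribution obfuscation" metric is not defined in this summary; one simple way to capture the same idea (all numbers and names below are illustrative, not the authors' metric) is to let an attacker fit a single maximum-likelihood histogram to samples drawn from a periodically re-randomized distribution, then score the average total-variation distance between that estimate and the distributions actually in force:

```python
import random

def tv_distance(p, q):
    # Total-variation distance between two discrete distributions.
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

# Source draws from a distribution that is re-randomized every epoch.
rng = random.Random(1234)
n_symbols, epochs, draws_per_epoch = 8, 10, 500

samples, true_dists = [], []
for _ in range(epochs):
    w = [rng.random() for _ in range(n_symbols)]
    p = [x / sum(w) for x in w]
    true_dists.append(p)
    samples += rng.choices(range(n_symbols), weights=p, k=draws_per_epoch)

# Attacker's maximum-likelihood estimate: one static empirical histogram.
counts = [samples.count(s) for s in range(n_symbols)]
mle = [c / len(samples) for c in counts]

# Obfuscation score: average TV distance between the attacker's single
# estimate and the per-epoch distributions that were actually in force.
score = sum(tv_distance(mle, p) for p in true_dists) / epochs
```

The faster the distribution moves, the larger this score stays, which mirrors the intuition behind the attack simulations reported in the paper.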
The paper then discusses the impact of key parameters: seed length, re‑randomization period, the number of chaotic maps, and the complexity of the non‑linear transformation. Shorter re‑randomization intervals improve security but raise CPU usage by roughly 10‑15 %. To balance security and efficiency, the authors propose a hybrid mode that alternates between DMCT and MCM steps, leveraging the analytical tractability of Markov chains and the high entropy of chaotic maps while keeping computational overhead modest.
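The hybrid mode's exact schedule is not specified in this summary; a sketch of one plausible reading, alternating a cheap Markov (DMCT-style) step with a high-entropy chaotic (MCM-style) step, with the class name `HybridSketch` and all details being assumptions:

```python
import hashlib
import os
import random
import time

class HybridSketch:
    """Illustrative hybrid mode: Markov-chain steps alternate with
    chaotic-map steps, trading tractability against entropy gain."""

    def __init__(self, n_states=8, seed=None, period=64):
        self.n = n_states
        self.seed = seed if seed is not None else os.urandom(32)
        self.period = period
        self.state, self.lx = 0, 0.4
        self.ctr = 0
        self._rerandomize()

    def _rerandomize(self):
        # Fresh row-stochastic transition matrix from seed + clock ticks.
        material = hashlib.sha256(
            self.seed + time.time_ns().to_bytes(8, "big")
        ).digest()
        rng = random.Random(material)
        self.matrix = []
        for _ in range(self.n):
            row = [rng.random() for _ in range(self.n)]
            total = sum(row)
            self.matrix.append([p / total for p in row])

    def next_symbol(self):
        self.ctr += 1
        if self.ctr % self.period == 0:
            self._rerandomize()
        if self.ctr % 2:
            # Cheap, analytically tractable Markov step.
            self.state = random.choices(
                range(self.n), weights=self.matrix[self.state]
            )[0]
        else:
            # High-entropy chaotic step: logistic map, seed-derived r.
            h = hashlib.sha256(
                self.seed + self.ctr.to_bytes(8, "big")
            ).digest()
            r = 3.9 + 0.1 * (int.from_bytes(h[:8], "big") / 2**64)
            self.lx = r * self.lx * (1.0 - self.lx)
            self.state = int(self.lx * self.n) % self.n
        return self.state
```

A longer `period` lowers the re-randomization overhead at the cost of leaving each transition matrix in force longer, which is exactly the security-versus-CPU trade-off the authors quantify.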
Finally, the authors argue that Deep Random Generators (DRGs) are especially valuable for unconditionally secure communication protocols, such as information‑theoretic key exchange or authentication schemes where an adversary’s knowledge of the probability distribution could compromise security. Because the distribution is concealed and constantly evolving, an eavesdropper cannot mount effective statistical attacks, making the protocols robust even against computationally unbounded adversaries. The paper concludes with suggestions for future work, including extensions to quantum‑resistant settings, distributed synchronization of DRGs across networked devices, and hardware implementations that retain the software‑level flexibility demonstrated in the study.