We have previously introduced Deep Random Secrecy, a new cryptologic technique capable of ensuring secrecy as close to perfection as desired against unlimited passive eavesdropping opponents. We have also previously introduced an extended protocol, based on Deep Random Secrecy, capable of resisting unlimited active MITM opponents. The main limitation of those protocols, in their initially presented version, is the large quantity of information that needs to be exchanged between the legitimate partners to distill secure digits. We have defined, and shown the existence of, an absolute constant, called the Cryptologic Limit, which represents the upper bound of the secrecy rate that can be reached by Deep Random Secrecy protocols. Lastly, we have already presented practical algorithms to generate Deep Randomness from classical computing resources. This article presents an optimization technique, based on the recombination and reuse of random bits; this technique dramatically increases the bandwidth performance of the formerly introduced protocols, without jeopardizing the entropy of the secret information. This optimization makes it possible to envision an implementation of Deep Random Secrecy at very reasonable cost. The article also summarizes former results in the perspective of a comprehensive implementation.
Modern cryptography mostly relies on mathematical problems commonly trusted as very difficult to solve, such as large integer factorization or discrete logarithm, belonging to complexity theory. No certainty exists about the actual difficulty of those problems. Other methods, based instead on information theory, have been developed since the early 90's. Those methods rely on hypotheses about the opponent (such as the "memory bounded" adversary [6]) or about the communication channel (such as "independent noisy channels" [5]); unfortunately, while their perfect secrecy has been proven under the given hypotheses, none of those hypotheses are easy to ensure in practice. Lastly, other methods based on physical theories such as quantum indetermination [3] have been described and experimented with, but they remain complex to implement.
Considering this theoretically unsatisfying situation, we proposed in [9] to explore a new path, where proven information-theoretic security can be reached without assuming any limitation on the capacities of the opponent, who is supposed to have unlimited computation and storage power, nor on the communication channel, which is supposed to be perfectly public, accessible and equivalent for every playing party (legitimate partners and opponents). Furthermore, while we only considered passive unlimited opponents in [9], we consider in this work active unlimited MITM opponents.
In our model of security, the legitimate partners of the protocol use Deep Random generation to generate their shared encryption key, and the behavior of the opponent, when inferring secret information from public information, is governed by the Deep Random assumption, which we introduce. In active opponent scenarios, the legitimate partners hold an initial shared authentication secret, which is used only for authentication purposes, not for generating the shared encryption key.
We introduced in [9] the Deep Random assumption, based on prior probability theory as developed by Jaynes [7]. The Deep Random assumption is an objective principle for assigning probability, compatible with the symmetry principle proposed by Jaynes [7].
Before presenting the Deep Random assumption, we first need to introduce prior probability theory.
If we denote by $\Phi$ the set of all prior information available to the observer regarding the probability distribution of a certain random variable $X$ ('prior' meaning before having observed any experiment of that variable), and by $i$ any public information available regarding an experiment of $X$, it is then possible to define the set of probability distributions that are compatible with the information $(\Phi, i)$ regarding an experiment of $X$; we denote this set of possible distributions by $\mathcal{D}(i)$.
The goal of prior probability theory is to provide tools enabling rigorous inference reasoning in a context of partial knowledge of probability distributions. A key idea for that purpose is to consider groups of transformations, applicable to the sample space $\Omega$ of a random variable $X$, that do not change the global perception of the observer. In other words, for any transformation $\sigma$ of such a group, the observer has no information enabling him to privilege $P(x)P(i \mid x)$ rather than $P(\sigma(x))P(i \mid \sigma(x))$ as the actual conditional distribution. This idea has been developed by Jaynes [7].
We will consider only finite groups of transformations, because one manipulates only discrete and bounded objects in digital communications. We define the acceptable groups $G$ as the ones fulfilling the two conditions below:
(i) Stability - for any distribution $\varphi \in \mathcal{D}(i)$ and for any transformation $\sigma \in G$, the transformed distribution $\varphi \circ \sigma$ also belongs to $\mathcal{D}(i)$; (ii) Convexity - any distribution that is invariant by action of $G$ does belong to $\mathcal{D}(i)$.
It can be noted that the set of distributions that are invariant by action of $G$ is exactly the set of symmetrized distributions $\{\varphi_G \mid \varphi \in \mathcal{D}(i)\}$, where $\varphi_G \triangleq \frac{1}{|G|}\sum_{\sigma \in G} \varphi \circ \sigma$.
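The symmetrization at play here can be illustrated with a minimal sketch (the sample space, the distribution and the group below are hypothetical toy choices, not objects of the protocol): averaging any distribution over a finite group of transformations yields a distribution that is invariant by action of that group.

```python
# Toy sketch: a distribution phi on the sample space Omega = {0,1,2,3},
# and the cyclic-shift group G = {x -> (x+k) mod 4, k = 0..3} acting on it.
# These choices are illustrative only.

OMEGA = [0, 1, 2, 3]
phi = {0: 0.5, 1: 0.2, 2: 0.2, 3: 0.1}   # an arbitrary, non-invariant distribution

def shift(k):
    """Transformation sigma_k : x -> (x + k) mod 4."""
    return lambda x: (x + k) % 4

G = [shift(k) for k in range(4)]          # cyclic group of order 4

# Symmetrized distribution: phi_G(x) = (1/|G|) * sum over sigma of phi(sigma(x))
phi_G = {x: sum(phi[sigma(x)] for sigma in G) / len(G) for x in OMEGA}

# phi_G is invariant: phi_G(sigma(x)) == phi_G(x) for every sigma in G
for sigma in G:
    for x in OMEGA:
        assert abs(phi_G[sigma(x)] - phi_G[x]) < 1e-12

# Since this particular group acts transitively on Omega, phi_G is
# (up to float rounding) the uniform distribution.
print(phi_G)
```

Here the chosen group acts transitively on the sample space, so symmetrization collapses the distribution all the way to uniform; a smaller group would only average it over each orbit.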
The set of acceptable groups as defined above is denoted $\mathcal{A}(i)$.
For any group of transformations $G$ applying on the sample space $\Omega$, we denote by $\mathcal{E}_G(i)$ the set of all possible conditional expectations obtained when the distribution $\varphi$ ranges over $\mathcal{D}(i)$ after symmetrization by $G$. In other words:
$$\mathcal{E}_G(i) = \left\{ i \mapsto E_{\varphi_G}[X \mid i] \;\middle|\; \varphi \in \mathcal{D}(i) \right\}$$
or also, by the remark above:
$$\mathcal{E}_G(i) = \left\{ i \mapsto E_{\psi}[X \mid i] \;\middle|\; \psi \in \mathcal{D}(i),\ \psi \text{ invariant by action of } G \right\}$$
The Deep Random assumption prescribes that, if $G \in \mathcal{A}(i)$, the strategy of the opponent observer, in order to estimate the secret information $X$ from the public information $i$, should be chosen by the opponent observer within the restricted set of strategies $\mathcal{E}_G(i)$.
The Deep Random assumption can thus be seen as a way to restrict the possibilities of the opponent to choose his strategy in order to estimate the private information $X$ from his knowledge of the public information $i$. It is a fully reasonable assumption, because the assigned prior distribution should remain stable under the action of a transformation that leaves the distribution uncertainty unchanged.
The definition of $\mathcal{E}_G(i)$ suggests of course that the strategy should eventually be picked in $\bigcap_{G \in \mathcal{A}(i)} \mathcal{E}_G(i)$, but it is enough for our purpose to find at least one group of transformations with which one can efficiently apply the Deep Random assumption to a protocol, in order to measure an advantage distilled by the legitimate partners compared to the opponent.
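To see why restricting the opponent to group-invariant strategies matters, here is a toy sketch (the joint distribution and the group below are hypothetical choices made for illustration; in the actual protocols, acceptable groups are induced by the Deep Random generation): when the opponent must compute his conditional-expectation strategy from the symmetrized distribution, his estimate of the secret may carry no advantage at all.

```python
# Toy model (illustrative only): a secret bit X and a public value I with
# joint distribution phi(x, i), and the group G = {identity, tau} where
# tau flips the secret bit but leaves the public value unchanged.

phi = {(0, 0): 0.5, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.3}   # true joint distribution

G = [lambda x, i: (x, i),        # identity
     lambda x, i: (1 - x, i)]    # tau: flips the secret bit only

# Symmetrized joint distribution: phi_G = (1/|G|) * sum over sigma of phi o sigma
phi_G = {(x, i): sum(phi[sigma(x, i)] for sigma in G) / len(G) for (x, i) in phi}

def cond_exp(dist, i):
    """Conditional expectation E_dist[X | I = i] on the two-point secret space."""
    num = sum(x * dist[(x, i)] for x in (0, 1))
    den = sum(dist[(x, i)] for x in (0, 1))
    return num / den

# With the true distribution, the public value is informative about X...
informed = [cond_exp(phi, 0), cond_exp(phi, 1)]        # noticeably away from 1/2

# ...but any strategy computed from the G-invariant distribution is blind:
restricted = [cond_exp(phi_G, 0), cond_exp(phi_G, 1)]  # both equal to 1/2
print(informed, restricted)
```

The restricted strategy returns 1/2 for every public value: under this toy group, the opponent is left with no exploitable correlation, which is exactly the kind of advantage the legitimate partners seek to distill.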
The following protocol has been presented in [9]. In order to sho