Simulations for Deep Random Secrecy Protocol
Thibault de Valroger (*)
(*) See contact and information about the author at last page
Abstract. We present numerical simulations measuring the secrecy and efficiency rates of the Information-Theoretically Secure protocol based on the Deep Random assumption, presented in the former article [9]. Those simulations specifically measure the respective error rates experienced by the legitimate partner and by the eavesdropper during the exchange of a data flow through the protocol. The measured error rates also enable us to estimate a lower bound of the Cryptologic Limit introduced in [9]. We discuss the variation of the protocol's parameters and their impact on the measured performance.
Key words. Perfect secrecy, Deep Random, Advantage Distillation, Privacy Amplification, secret key agreement, unconditional security, quantum resistant

I. Introduction and summary of former work

Modern cryptography mostly relies on mathematical problems commonly trusted as very difficult to solve, such as large integer factorization or discrete logarithm, belonging to complexity theory. No certainty exists about the actual difficulty of those problems. Other methods, based instead on information theory, have been developed since the early 90s. Those methods rely on hypotheses about the opponent (such as a "memory-bounded" adversary [6]) or about the communication channel (such as "independent noisy channels" [5]); unfortunately, while their perfect secrecy has been proven under the given hypotheses, none of those hypotheses is easy to ensure in practice. Finally, other methods based on physical theories, like quantum indetermination [3] or chaos generation, have been described and experimented with, but they are complex to implement and, again, rely on solid but unproven and still only partly understood theories. Considering this theoretically unsatisfying situation, we have proposed in [9] to explore a new path, where proven information-theoretic security can be reached without assuming any limitation on the opponent, who is supposed to have unlimited calculation and storage power, nor on the communication channel, which is supposed to be perfectly public, accessible, and equivalent for all playing parties (legitimate partners and opponents). In our model of security, the legitimate partners of the protocol use Deep Random generation to produce their secret information, and the behavior of the opponent, when inferring from public information, is governed by the Deep Random assumption, which we introduce.
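The protocol itself is specified in [9]; as a purely illustrative sketch (not the article's code), the kind of measurement the abstract describes can be pictured as a Monte Carlo harness that simulates noisy observations of a bit flow and estimates the respective error rates of the legitimate partner and the eavesdropper. The per-bit error probabilities `p_bob` and `p_eve` below are placeholder assumptions, not values from the article:

```python
import random

# Hypothetical harness: estimate the error rates of the legitimate
# partner (Bob) and the eavesdropper (Eve) over many simulated exchanges.
random.seed(0)

N = 100_000
p_bob, p_eve = 0.05, 0.30  # assumed per-bit error probabilities (placeholders)

alice = [random.randint(0, 1) for _ in range(N)]
# Each observer sees Alice's bit flipped with his own error probability.
bob = [b ^ (random.random() < p_bob) for b in alice]
eve = [b ^ (random.random() < p_eve) for b in alice]

def err(xs):
    """Fraction of positions where the observer disagrees with Alice."""
    return sum(a != x for a, x in zip(alice, xs)) / N

print(f"Bob error rate: {err(bob):.3f}, Eve error rate: {err(eve):.3f}")
```

With a large number of exchanges, the measured rates concentrate near the underlying per-bit error probabilities; in the article, it is the gap between the two rates that is exploited.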
Back on the Deep Random assumption

We have introduced in [9] the Deep Random assumption, based on Prior Probability theory as developed by Jaynes [7]. The Deep Random assumption is an objective principle for assigning probabilities, compatible with the symmetry principle proposed by Jaynes [7].
Before presenting the Deep Random assumption, it is necessary to introduce Prior probability theory. If we denote by $\mathcal{I}$ the set of all prior information available to the observer regarding the probability distribution of a certain random variable $X$ ('prior' meaning before having observed any experiment of that variable), and by $\mathcal{P}$ any public information available regarding an experiment of $X$, it is then possible to define the set of possible distributions that are compatible with the information $(\mathcal{I}, \mathcal{P})$ regarding an experiment of $X$; we denote this set of possible distributions as $\Omega$.
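As a toy illustration (hypothetical, not from the article), $\Omega$ can be pictured as the subset of candidate distributions that remain compatible with a piece of public information, here the constraint on a binary variable that outcome 1 is at least as likely as outcome 0:

```python
# Toy sample space {0, 1}; a distribution is a pair (p0, p1).
candidates = [(i / 10, 1 - i / 10) for i in range(11)]

# Public information P: an experiment revealed that outcome 1 is at least
# as likely as outcome 0. Omega keeps only the compatible distributions.
omega = [d for d in candidates if d[1] >= d[0]]

print(len(omega))  # 6 candidate distributions satisfy p1 >= p0
```

The observer knows only that the true distribution lies somewhere in `omega`; Prior probability theory is about reasoning rigorously under exactly this kind of partial knowledge.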
The goal of Prior probability theory is to provide tools enabling rigorous inference reasoning in a context of partial knowledge of probability distributions. A key idea for that purpose is to consider groups of transformations, applicable to the sample space of a random variable $X$, that do not change the global perception of the observer. In other words, for any transformation $\sigma$ of such a group, the observer has no information enabling him to privilege $\varphi(x) = P(X = x \mid \mathcal{P})$ rather than $\varphi(\sigma(x)) = P(X = \sigma(x) \mid \mathcal{P})$ as the actual conditional distribution. This idea has been developed by Jaynes [7].
We will consider only finite groups of transformations, because one manipulates only discrete and bounded objects in digital communications. We define the acceptable groups $G$ as the ones fulfilling the two conditions below:

(i) Stability - For any distribution $\varphi \in \Omega$, and for any transformation $\sigma \in G$, then $\varphi \circ \sigma \in \Omega$.

(ii) Convexity - Any distribution that is invariant by the action of $G$ does belong to $\Omega$.

It can be noted that the set of distributions that are invariant by the action of $G$ is exactly:

$$\Phi_G(\Omega) = \left\{ \frac{1}{|G|} \sum_{\sigma \in G} \varphi \circ \sigma \;\middle|\; \varphi \in \Omega \right\}$$

For any group of transformations $G$ applying on the sample space of $X$, we denote by $E_G(\Omega)$ the set of all possible conditional expectations when the distribution runs through $\Phi_G(\Omega)$. In other words:

$$E_G(\Omega) = \left\{ E_\varphi[X \mid \mathcal{P}] \;\middle|\; \varphi \in \Phi_G(\Omega) \right\}$$

or also:

$$E_G(\Omega) = \left\{ \int x \, \varphi(x \mid \mathcal{P}) \, dx \;\middle|\; \varphi \in \Phi_G(\Omega) \right\}$$

The Deep Random assumption prescribes that, if , the strategy of the opponent observer , in orde
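The symmetrized distribution $\frac{1}{|G|}\sum_{\sigma \in G} \varphi \circ \sigma$ can be illustrated numerically. In the following toy sketch (hypothetical, not from the article), $G$ is taken to be the full permutation group $S_3$ acting on a three-element sample space; exact rational arithmetic makes the invariance check exact, and for the full symmetric group the symmetrized distribution is the uniform one:

```python
from fractions import Fraction
from itertools import permutations

# A distribution over the sample space {0, 1, 2}, as exact rationals.
phi = (Fraction(1, 2), Fraction(3, 10), Fraction(1, 5))

# G: the full permutation group S_3 acting on the sample space.
G = list(permutations(range(3)))

def act(sigma, dist):
    """Return dist composed with sigma: (dist o sigma)(x) = dist[sigma(x)]."""
    return tuple(dist[sigma[x]] for x in range(len(dist)))

# Symmetrization: (1/|G|) * sum over sigma in G of (phi o sigma).
sym = tuple(sum(act(s, phi)[x] for s in G) / len(G) for x in range(3))

# Invariance: sym o sigma == sym for every sigma in G.
assert all(act(s, sym) == sym for s in G)
print(sym)  # (Fraction(1, 3), Fraction(1, 3), Fraction(1, 3))
```

For a smaller group (e.g. a cyclic subgroup of $S_3$), the same construction still yields a $G$-invariant distribution, but not necessarily the uniform one; the choice of $S_3$ here is only to keep the example checkable by hand.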