Probabilistic Anonymity and Admissible Schedulers
When studying safety properties of (formal) protocol models, it is customary to view the scheduler as an adversary: an entity trying to falsify the safety property. We show that in the context of security protocols, and in particular of anonymizing protocols, this gives the adversary too much power; for instance, the contents of encrypted messages and internal computations by the parties should be considered invisible to the adversary. We restrict the class of schedulers to a class of admissible schedulers which better model adversarial behaviour. These admissible schedulers base their decisions solely on the past behaviour of the system that is visible to the adversary. Using this, we propose a definition of anonymity: for all admissible schedulers, the identity of the users and the observations of the adversary are independent stochastic variables. We also develop a proof technique that can be used to prove anonymity in typical cases: a system is anonymous if it is possible to "exchange" the behaviour of two users without the adversary "noticing".
💡 Research Summary
The paper challenges the conventional practice of modelling the scheduler as an all‑powerful adversary when analysing safety properties of formal protocol models, especially those that aim to provide anonymity. In the traditional setting the scheduler can base its decisions on any internal state of the system, including the contents of encrypted messages and private computations. This gives the adversary more power than is realistic: in a real network an attacker can only see what is transmitted over the wire and the events that the protocol explicitly makes observable. Consequently, many existing anonymity proofs are either overly conservative or, paradoxically, unsound because they rely on an adversary that can exploit information that would never be available in practice.
To address this mismatch the authors introduce the notion of visibility: the set of facts that an external observer can actually obtain (e.g., packet headers, timestamps, the occurrence of public actions) while the payload of ciphertexts and internal variables remain hidden. Based on visibility they define a new class of schedulers called admissible schedulers. An admissible scheduler must satisfy two constraints: (1) its decision at any point depends solely on the sequence of observable events that have occurred so far, and (2) for any two histories that are indistinguishable to the observer, the scheduler must assign the same probability distribution over the next actions. In other words, admissible schedulers cannot exploit secret information; they are forced to behave identically on observationally equivalent runs.
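The two constraints above can be sketched in code. The following is a minimal toy model (all names hypothetical, not from the paper): events prefixed `obs:` are visible to the adversary, everything else is hidden, and the scheduler caches one distribution per observable history so that indistinguishable histories are guaranteed to be scheduled identically.

```python
class AdmissibleScheduler:
    """Chooses among enabled actions using a distribution that depends
    ONLY on the observable projection of the history (constraints (1)
    and (2) above)."""

    def __init__(self):
        self._choices = {}  # observable history -> fixed distribution

    @staticmethod
    def observable(history):
        """Project a history onto the events the adversary can see."""
        return tuple(e for e in history if e.startswith("obs:"))

    def distribution(self, history, actions):
        key = (self.observable(history), tuple(sorted(actions)))
        if key not in self._choices:
            # Fix a (here uniform) distribution the first time this
            # observable history is seen; reusing it enforces identical
            # behaviour on observationally equivalent runs.
            n = len(actions)
            self._choices[key] = {a: 1.0 / n for a in sorted(actions)}
        return self._choices[key]

sched = AdmissibleScheduler()
h1 = ["obs:send", "dec:secret_a"]  # two histories differing only in
h2 = ["obs:send", "dec:secret_b"]  # hidden (non-observable) events
d1 = sched.distribution(h1, {"deliver", "delay"})
d2 = sched.distribution(h2, {"deliver", "delay"})
assert d1 == d2  # indistinguishable histories, identical scheduling
```

A scheduler that inspected the hidden `dec:` events (e.g., decrypted payloads) could return different distributions for `h1` and `h2`; admissibility rules exactly that out.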
With this refined adversary model the authors propose a probabilistic definition of anonymity. Let X be the random variable representing the identity of the user who performed a particular action, and let Y be the random variable representing the adversary’s observation (the observable trace). The system is said to be anonymous if, for every admissible scheduler, X and Y are statistically independent, i.e., P(X, Y) = P(X)·P(Y). This definition captures the intuitive requirement that an observer, even after seeing the entire trace, gains no information about who actually acted.
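The independence condition is easy to check on a finite joint distribution. Below is a small illustrative check (the distributions are invented for illustration, not taken from the paper): marginals P(X) and P(Y) are computed from the joint table and compared against the products P(X)·P(Y).

```python
from itertools import product

def is_independent(joint, tol=1e-9):
    """Check P(X=x, Y=y) == P(X=x) * P(Y=y) for every pair (x, y)."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p  # marginal over user identities
        py[y] = py.get(y, 0.0) + p  # marginal over observable traces
    return all(abs(joint.get((x, y), 0.0) - px[x] * py[y]) <= tol
               for x, y in product(px, py))

# Anonymous system: each trace is equally likely whoever acted.
anon = {("alice", "t1"): 0.25, ("alice", "t2"): 0.25,
        ("bob",   "t1"): 0.25, ("bob",   "t2"): 0.25}

# Leaky system: trace t1 is more likely when alice acts.
leaky = {("alice", "t1"): 0.4, ("alice", "t2"): 0.1,
         ("bob",   "t1"): 0.1, ("bob",   "t2"): 0.4}

print(is_independent(anon))   # True
print(is_independent(leaky))  # False
```

In the leaky system an adversary who observes `t1` should update towards alice (P(alice | t1) = 0.8 ≠ 0.5), which is precisely the kind of inference the definition forbids.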
The main technical contribution is a proof technique called the exchange method. The idea is to show that swapping the roles of any two users in the protocol does not change the distribution of observable traces under any admissible scheduler. Formally, the authors model the protocol as a labeled transition system (LTS) and define a symmetry relation that maps the actions of user i to those of user j while preserving the observable labels. They then prove a scheduler invariance lemma: because admissible schedulers depend only on observable histories, two runs that are symmetric with respect to the exchange will be scheduled in exactly the same way, leading to identical probability distributions over traces. Consequently, no admissible scheduler can distinguish which user performed the action, establishing anonymity.
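The exchange argument can be illustrated on an enumerated toy system (hypothetical, not the paper's formalisation): a run is a list of `(actor, label, visible)` events with a probability, the adversary sees only the labels of visible events, and the exchange map swaps the actions of two users. If the observable trace distribution is unchanged under the swap, no admissible scheduler can tell the users apart.

```python
from collections import defaultdict

def obs(run):
    """Observable trace: labels of visible events, actors erased."""
    return tuple(label for (_actor, label, visible) in run if visible)

def exchange(run, i, j):
    """Symmetry map swapping the actions of users i and j."""
    swap = {i: j, j: i}
    return [(swap.get(actor, actor), label, visible)
            for (actor, label, visible) in run]

def trace_distribution(runs):
    """Probability distribution over observable traces."""
    dist = defaultdict(float)
    for run, p in runs:
        dist[obs(run)] += p
    return dict(dist)

# Either user sends the message with probability 1/2; the internal
# computation is invisible to the adversary.
runs = [
    ([("u1", "msg", True), ("u1", "compute", False)], 0.5),
    ([("u2", "msg", True), ("u2", "compute", False)], 0.5),
]
swapped = [(exchange(run, "u1", "u2"), p) for run, p in runs]

# Exchanging u1 and u2 leaves the trace distribution unchanged, so the
# adversary learns nothing about which user sent the message.
assert trace_distribution(runs) == trace_distribution(swapped)
```

This mirrors the scheduler-invariance lemma: because the observable projections of the original and exchanged runs coincide, an admissible scheduler resolves both in exactly the same way, and the two trace distributions are equal by construction.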
The paper also discusses how the exchange method can be automated. By encoding the protocol in a process algebra or a model‑checking language (e.g., Promela) and extracting the observable labeling function, one can use symmetry‑detection algorithms to generate the required exchange mappings. Model‑checkers can then verify that for each pair of users the swapped system is bisimilar with respect to the observable labels, which directly yields the independence condition.
To demonstrate practicality, the authors apply their framework to several well‑known anonymity constructions: a classic electronic voting protocol, a mix‑network based anonymous communication system, and a routing‑privacy protocol. In each case they construct admissible schedulers, perform the exchange argument, and verify that the independence condition holds. The analysis reveals that certain subtle leaks identified by traditional scheduler‑based proofs disappear when the adversary is restricted to observable information only.
In summary, the paper makes three key contributions: (1) it identifies the over‑approximation inherent in the traditional “scheduler‑as‑adversary” model for anonymity protocols; (2) it introduces admissible schedulers grounded in the realistic notion of visibility, thereby providing a more accurate adversarial model; and (3) it offers a robust, symmetry‑based proof technique that can be mechanised for a wide class of protocols. By aligning formal anonymity definitions with what an attacker can actually observe, the work bridges a gap between theoretical verification and practical security guarantees, offering a valuable toolset for both researchers and engineers working on privacy‑preserving systems.