A User Model for Information Erasure
Hunt and Sands (ESOP'08) studied a notion of information erasure for systems which receive secrets intended for limited-time use. Erasure demands that once a secret has fulfilled its purpose the subsequent behaviour of the system should reveal no information about the erased data. In this paper we address a shortcoming in that work: for erasure to be possible the user who provides data must also play his part, but previously that role was only specified informally. Here we provide a formal model of the user and a collection of requirements called erasure friendliness. We prove that an erasure-friendly user can be composed with an erasing system (in the sense of Hunt and Sands) to obtain a combined system which is jointly erasing in an appropriate sense. In doing so we identify stronger requirements on the user than those informally described in the previous work.
💡 Research Summary
Hunt and Sands introduced the notion of information erasure for systems that receive secrets intended for limited‑time use, requiring that after the secret’s purpose has been fulfilled the system’s subsequent behaviour reveal no information about that data. Their model, however, left the user’s responsibilities informal, creating a gap in the overall security argument. This paper fills that gap by providing a rigorous formal model of the user and defining a set of constraints called “erasure‑friendliness.”
Erasure‑friendliness comprises four main requirements. First, the user must supply the secret consistently until the system explicitly requests erasure, and must never reuse or resend the secret after the erasure request (Non‑Reuse). Second, the system must send an explicit acknowledgement after performing the erasure transition; the user must wait for this signal before issuing any further secret‑related input (Acknowledgement). Third, the user must follow a predefined fault‑tolerance protocol if the system encounters errors during erasure, ensuring that no secret leaks even in abnormal termination (Fault‑Tolerance). Fourth, after erasure the user must purge any metadata (timestamps, hashes, identifiers) that could be correlated with the secret (Metadata Cleanliness).
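The Non‑Reuse and Acknowledgement requirements above can be pictured as a small state machine on the user side. The sketch below is a hypothetical illustration, not the paper's formalism: the state names and method names are invented, and it covers only the two requirements that constrain the user's transitions (fault‑tolerance and metadata cleanliness are omitted).

```python
from enum import Enum, auto

class UserState(Enum):
    SUPPLYING = auto()     # may release the secret on request
    AWAITING_ACK = auto()  # erasure requested; waiting for the system's signal
    IDLE = auto()          # post-erase: only non-secret actions allowed

class ErasureFriendlyUser:
    """Hypothetical sketch of a user obeying Non-Reuse and Acknowledgement."""

    def __init__(self, secret):
        self.secret = secret
        self.state = UserState.SUPPLYING

    def on_request_secret(self):
        # Non-Reuse: the secret may only flow before erasure is requested.
        if self.state is not UserState.SUPPLYING:
            raise RuntimeError("secret must not be resent after erasure request")
        return self.secret

    def on_erase_request(self):
        # Purge the local copy and stop supplying the secret.
        self.secret = None
        self.state = UserState.AWAITING_ACK

    def on_erase_ack(self):
        # Acknowledgement: only after this signal may the user act again,
        # and then only with non-secret-related actions.
        if self.state is not UserState.AWAITING_ACK:
            raise RuntimeError("unexpected acknowledgement")
        self.state = UserState.IDLE
```

The point of the `AWAITING_ACK` state is that the user cannot re‑enter any secret‑supplying behaviour between the erasure request and the system's acknowledgement, which is where an informal user model could leak.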
To capture these constraints, the authors model both the system and the user as labelled transition systems (LTSs) or I/O‑automata. Communication occurs over a channel marked with an “erase” label. An erasure‑friendly user, upon observing the erase label, immediately transitions to an idle state and thereafter performs only non‑secret‑related actions. The combined system is the product of the two automata, whose states are pairs (system‑state, user‑state). The key theorem proves that if the system satisfies the original Hunt‑Sands erasure property and the user satisfies erasure‑friendliness, then the product system satisfies a joint erasure property: every observable trace after the erase transition is independent of the erased secret. The proof relies on an invariant preserved by the composition operator: after the erase transition, both components are in “post‑erase” states that forbid any secret‑dependent transitions, guaranteeing observational equivalence of traces regardless of the secret’s value.
By formalising the user’s role, the paper strengthens the original erasure model, making it applicable to real‑world protocols where both parties must cooperate to achieve secure deletion. The work also opens avenues for automated verification, as the combined LTS can be fed to model‑checking tools. Future directions suggested include extending the framework to multi‑user scenarios, handling concurrent erasure requests, and integrating cryptographic primitives to support more complex privacy‑preserving applications.