Generalizing Permissive-Upgrade in Dynamic Information Flow Analysis
Preventing implicit information flows by dynamic program analysis requires coarse approximations that result in false positives, because a dynamic monitor sees only the executed trace of the program. One widely deployed method is the no-sensitive-upgrade check, which terminates a program whenever a variable’s taint is upgraded (made more sensitive) due to a control dependence on tainted data. Although sound, this method is restrictive, e.g., it terminates the program even if the upgraded variable is never used subsequently. To counter this, Austin and Flanagan introduced the permissive-upgrade check, which allows a variable upgrade due to control dependence, but marks the variable “partially-leaked”. The program is stopped later if it tries to use the partially-leaked variable. Permissive-upgrade handles the dead-variable assignment problem and remains sound. However, Austin and Flanagan develop permissive-upgrade only for a two-point (low-high) security lattice and indicate a generalization to pointwise products of such lattices. In this paper, we develop a non-trivial and non-obvious generalization of permissive-upgrade to arbitrary lattices. The key difficulty lies in finding a suitable notion of partial leaks that is both sound and permissive and in developing a suitable definition of memory equivalence that allows an inductive proof of soundness.
💡 Research Summary
The paper addresses a fundamental limitation of dynamic information‑flow control (IFC) mechanisms that aim to prevent implicit leaks by tracking a program’s execution trace. Traditional dynamic IFC relies on the “no‑sensitive‑upgrade” (NSU) check: whenever an assignment would raise a variable’s security label because the current program‑counter label (pc) is more sensitive, the monitor aborts the program instead of performing the upgrade. While NSU guarantees termination‑insensitive non‑interference (TINI), it is overly restrictive: it halts programs even when the upgraded variable is never subsequently used, a situation known as the dead‑variable assignment problem.
Austin and Flanagan previously introduced a more permissive alternative called “permissive‑upgrade”. In a two‑point lattice (L ⊏ H) they added a special label P, meaning “partially‑leaked”. When a variable is upgraded under a high pc, instead of aborting, the monitor assigns the P label to the variable. The program may continue, but any later use of a P‑labeled value (e.g., as a branch condition) triggers a failure. This approach eliminates many unnecessary aborts while still enforcing TINI. However, their formulation was limited to the binary lattice and to pointwise products of such lattices; a generalization to arbitrary security lattices was left open.
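The contrast between the two checks can be made concrete with a minimal Python sketch of the two‑point lattice monitors. This is an illustration, not the paper's formal semantics; the function names (`nsu_assign`, `pu_assign`) and the tuple‑based store are assumptions of this sketch.

```python
# Minimal sketch of NSU vs. permissive-upgrade on the two-point lattice
# L < H, with P as the "partially-leaked" label of Austin and Flanagan.
L, H, P = "L", "H", "P"

class IFCError(Exception):
    pass

def nsu_assign(store, var, value, value_label, pc):
    """No-sensitive-upgrade: abort on any upgrade of a low variable under a high pc."""
    if pc == H and store[var][1] == L:
        raise IFCError("NSU: sensitive upgrade of " + var)
    label = H if (pc == H or value_label == H) else L
    store[var] = (value, label)

def pu_assign(store, var, value, value_label, pc):
    """Permissive-upgrade: mark the variable partially leaked instead of aborting."""
    if pc == H and store[var][1] == L:
        store[var] = (value, P)  # usable only if never inspected later
    else:
        label = H if (pc == H or value_label == H) else L
        store[var] = (value, label)

def pu_use(store, var):
    """Using a partially-leaked value (e.g., as a branch condition) is where PU aborts."""
    value, label = store[var]
    if label == P:
        raise IFCError("PU: use of partially-leaked " + var)
    return value, label

# Dead-variable assignment: y is upgraded under a high pc but never used again.
store = {"x": (1, H), "y": (0, L)}
if store["x"][0] == 1:                   # branch on a high value => pc = H
    try:
        nsu_assign(store, "y", 1, L, H)  # NSU aborts here, even though y is dead
    except IFCError as e:
        print(e)
    pu_assign(store, "y", 1, L, H)       # PU continues; y becomes P-labeled
```

Under permissive‑upgrade the program runs to completion; it would be stopped only if it later called `pu_use` on `y`.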
The authors of this paper present a non‑trivial extension of permissive‑upgrade to any finite security lattice. The key technical contributions are:
- A generalized notion of partial leakage.
  They introduce a new label P that is not simply a top element but a “potentially‑high” marker. The join operation is redefined so that any join involving P yields P, reflecting that once a value has been influenced by a higher pc it remains potentially high for the rest of the execution. The assignment rule (assn‑PUS) determines the resulting label k as follows:
  - If pc = L, the rule behaves like NSU (no upgrade occurs).
  - If pc = H and the variable’s original label is already H, the label stays H.
  - Otherwise (e.g., original label L and pc = H), the variable receives label P.
  This captures precisely the situation where a low‑labeled variable is implicitly affected by a high‑sensitivity context but may still be low in alternative executions.
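One plausible reading of this generalized rule can be sketched over a small diamond lattice. The lattice (bot ⊑ l1, l2 ⊑ top), the `leq`/`join` encoding, and the exact case split in `assign_label` are illustrative assumptions of this sketch, not the paper's formal assn‑PUS rule.

```python
# Sketch of a generalized permissive-upgrade assignment rule over a
# diamond lattice bot < l1, l2 < top (names are illustrative).
P = "P"  # partially-leaked marker, absorbing under join

LEQ = {
    ("bot", "bot"), ("bot", "l1"), ("bot", "l2"), ("bot", "top"),
    ("l1", "l1"), ("l1", "top"),
    ("l2", "l2"), ("l2", "top"),
    ("top", "top"),
}

def leq(a, b):
    return (a, b) in LEQ

def join(a, b):
    if a == P or b == P:
        return P  # any join involving P yields P
    # least upper bound: the labels are listed bottom-up, so the first
    # common upper bound found is the least one
    return next(c for c in ("bot", "l1", "l2", "top")
                if leq(a, c) and leq(b, c))

def assign_label(old_label, value_label, pc):
    """Resulting label of `x := e` where x had old_label, e has value_label,
    under program-counter label pc (sketch of the generalized rule)."""
    if old_label != P and leq(pc, old_label):
        # the old label already covers pc: ordinary upgrade, no partial leak
        return join(pc, value_label)
    if pc == "bot":
        # a public pc overwrites even a partially-leaked variable
        return join(pc, value_label)
    # pc is not below the old label: the variable becomes partially leaked
    return P
```

On the two‑point sublattice {bot, top} this collapses to the three cases listed above: a low pc upgrades normally, a high pc over an already‑high variable keeps label top, and a high pc over a low variable yields P.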
- A revised memory equivalence relation.
  Traditional TINI proofs use A‑equivalence: two labeled values are A‑equivalent if either both labels are ≤ A and the concrete values are equal, or both labels are above A (hence invisible). With P present, the authors extend this relation: any value labeled P is considered indistinguishable from any other value for all adversary levels A. Formally, n₁ᵏ¹ ∼ₐ n₂ᵏ² holds if (k¹ = k² = L and n₁ = n₂) or (k¹ = k² = H) or (k¹ = P) or (k² = P). This definition preserves the inductive step in the non‑interference proof because P‑labeled values never contribute observable differences.
- A soundness proof for arbitrary lattices.
  Using the new assignment rule and memory equivalence, the authors prove that the extended monitor satisfies TINI for any security lattice. The proof proceeds by structural induction on the program’s big‑step semantics, showing that if two initial stores are A‑equivalent, then after executing the same command under the same pc, the resulting stores remain A‑equivalent. The crucial lemmas handle the case where a variable becomes P‑labeled and later participates in a branch; the monitor aborts exactly when such a use would violate non‑interference.
- Comparison with the product‑lattice approach.
  Austin and Flanagan suggested that a pointwise product of binary lattices could model multi‑level policies, but that construction yields a powerset lattice where P behaves like a top element, leading to unnecessary aborts. The authors demonstrate that their generalized permissive‑upgrade is strictly more permissive on certain lattices: some programs that would be halted under the product construction complete successfully under the new scheme, while still preserving TINI.
- Practical implications and future work.
  The generalized mechanism makes permissive‑upgrade applicable to real‑world languages and systems that employ rich security lattices (e.g., role‑based, compartmentalized, or multilevel confidentiality/integrity models). The paper suggests extensions such as handling declassification of P‑labels, integrating with static analyses for hybrid enforcement, and exploring concurrency where multiple pc values may interact.
In summary, the paper delivers a rigorous, lattice‑agnostic formulation of permissive‑upgrade, resolves the long‑standing open problem of defining partial leaks for arbitrary security policies, and provides a soundness proof that retains the desirable permissiveness of the original two‑point technique while broadening its applicability to complex, real‑world security lattices.