Liberalizing Dependency
The dependency core calculus (DCC), a simple extension of the computational lambda calculus, captures a common notion of dependency that arises in many programming language settings. This notion of dependency is closely related to the notion of information flow in security; it is sensitive not only to data dependencies that cause explicit flows, but also to control dependencies that cause implicit flows. In this paper, we study variants of DCC in which the data and control dependencies are decoupled. This allows us to consider settings where a weaker notion of dependency—one that restricts only explicit flows—may usefully coexist with DCC’s stronger notion of dependency. In particular, we show how strong, noninterference-based security may be reconciled with weak, trace-based security within the same system, enhancing soundness of the latter and completeness of the former.
💡 Research Summary
The paper revisits the Dependency Core Calculus (DCC), a foundational formalism that captures both explicit (data) and implicit (control) information flows, and proposes a systematic separation of these two kinds of dependency. Traditional DCC tracks data and control dependencies through a single security lattice, enforcing a strong noninterference property that blocks any flow from a high-security level to a low-security level, whether the flow arises from direct data movement or from a control decision. While this model is mathematically elegant, it is often too restrictive for real-world systems in which only explicit data leaks are unacceptable and certain implicit flows (e.g., logging, debugging, performance monitoring) are benign.
To address this mismatch, the authors introduce two orthogonal sub‑calculi: a Data‑Dependency DCC (D‑DCC) that retains the original DCC semantics and guarantees strong non‑interference, and a Control‑Dependency DCC (C‑DCC) that adopts a trace‑based security model. In the trace‑based model, an observer may see the execution path (the sequence of branch decisions) but cannot infer the secret values that influenced those decisions. The C‑DCC therefore relaxes the constraints on control flow while still preventing the leakage of secret data through observable traces.
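The weak, explicit-flow-only discipline can be illustrated with a minimal sketch (the syntax and taint-tracking interpreter here are hypothetical illustrations, not the paper's formal calculus): outputting a secret value directly is rejected, while branching on a secret is permitted, so the observable trace may depend on the secret's control influence but never contains the secret itself.

```python
def run(prog, env):
    """Interpret a list of statements, returning the observable output trace.

    env maps variable names to (value, is_secret) pairs. Only *explicit*
    flows are policed: an `output` of secret data raises, but an `if`
    guarded by secret data is allowed (the weak, trace-based model).
    """
    trace = []
    for stmt in prog:
        kind = stmt[0]
        if kind == "output":                 # explicit flow: value leaves the program
            val, secret = env[stmt[1]]
            if secret:
                raise ValueError(f"explicit flow of secret {stmt[1]!r}")
            trace.append(val)
        elif kind == "if":                   # implicit flow: permitted here
            _, var, then_branch, else_branch = stmt
            val, _secret = env[var]
            trace += run(then_branch if val else else_branch, env)
    return trace

env = {"pin_ok": (True, True), "msg": ("welcome", False)}
# Branching on the secret pin_ok is fine; outputting pin_ok would raise.
print(run([("if", "pin_ok", [("output", "msg")], [])], env))  # ['welcome']
```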
The core technical contribution is a compositional type system that annotates each term with two independent security levels: a data level ℓ_d and a control level ℓ_c. The typing rules for D‑DCC are identical to classic DCC, ensuring that a term typed at a high data level cannot be used where a low data level is expected. The C‑DCC rules, by contrast, focus on the control level: a conditional or loop is type‑checked only against ℓ_c, regardless of the data level of its guard expression. This decoupling allows a high‑security guard to drive a low‑security branch, provided the branch itself does not expose high‑security data.
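The decoupling can be sketched as a toy checker over a two-point lattice (the term encoding and rule below are a hypothetical simplification, not the paper's typing judgment): the result of a conditional takes its data level from the branches alone, so a HIGH-data guard can yield a LOW-data result, while the guard's control level still propagates.

```python
# Two-point security lattice: LOW <= HIGH.
LOW, HIGH = 0, 1

def check(term, sig):
    """Return the (data_level, control_level) pair of a term.

    Terms: ("var", name) looks up declared levels in sig;
           ("if", guard, t1, t2) illustrates the decoupled rule.
    """
    kind = term[0]
    if kind == "var":
        return sig[term[1]]
    if kind == "if":
        _, guard, t1, t2 = term
        _gd, gc = check(guard, sig)          # guard's data level is NOT joined in
        d1, c1 = check(t1, sig)
        d2, c2 = check(t2, sig)
        # Decoupled rule: data level comes from the branches only; the guard
        # contributes only its *control* level to the result.
        return (max(d1, d2), max(gc, c1, c2))
    raise ValueError(f"unknown term {kind!r}")

sig = {"secret_flag": (HIGH, LOW), "x": (LOW, LOW), "y": (LOW, LOW)}
t = ("if", ("var", "secret_flag"), ("var", "x"), ("var", "y"))
print(check(t, sig))  # (0, 0): LOW data and LOW control despite the HIGH-data guard
```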
A crucial “cross‑validation” mechanism ties the two lattices together. Whenever a high‑level data value influences a control construct, the system checks that the corresponding control level is sufficiently permissive; if not, the program is rejected. This mechanism eliminates the false‑positive alarms that plague purely trace‑based analyses, while still allowing the flexibility of weak security where appropriate.
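The cross-validation condition can be sketched as a standalone policy check (a hypothetical runtime encoding for illustration; in the system described it would be a typing side condition, and the policy table here is invented): a policy records, for each data level, the most permissive control level that data may drive, and a guard whose levels exceed the policy is rejected.

```python
# Two-point security lattice: LOW <= HIGH.
LOW, HIGH = 0, 1

def cross_validate(guard_levels, policy):
    """Check a guard's (data, control) pair against the policy.

    policy maps each data level to the maximum control level that
    data of that level is allowed to influence.
    """
    d, c = guard_levels
    if c > policy[d]:
        raise ValueError(f"data level {d} may not drive control level {c}")
    return True

# Hypothetical policy: HIGH data may drive only LOW-control effects
# (e.g. benign logging), while LOW data may drive anything.
policy = {LOW: HIGH, HIGH: LOW}
print(cross_validate((HIGH, LOW), policy))  # True: permitted pairing
```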
The authors prove three main theorems. The first establishes that D‑DCC satisfies the classic non‑interference property. The second shows that C‑DCC guarantees trace‑based confidentiality: an attacker observing the execution trace learns nothing about high‑security data beyond what is allowed by the control lattice. The third, the “compositional security theorem,” demonstrates that a program well‑typed in both sub‑calculi simultaneously enjoys strong non‑interference for explicit flows and trace‑based confidentiality for implicit flows. The proof proceeds by first showing each sub‑calculus is sound in isolation, then leveraging the cross‑validation condition to argue that any interaction between data and control respects both security policies.
To illustrate practical impact, the paper presents two case studies. In a web application, user authentication tokens are written to a log file. Under classic DCC this would be rejected because the token (high data level) appears in a control decision that determines whether to log. Using the new system, the logging operation is treated as a control‑level effect with a low control level, and the data‑level restriction does not apply, allowing safe logging. In a cloud data‑processing pipeline, intermediate results are monitored for performance. The monitoring code reads control flags derived from high‑security data but does not expose those data values; the C‑DCC permits this because the flags are low‑level control constructs. Both examples confirm that the combined system can express policies that are impossible to capture with a single lattice.
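The logging case study can be sketched as follows (the function, names, and comments are hypothetical illustrations of the policy, not code from the paper): a HIGH-data credential may decide *whether* to log, because logging is declared a LOW-control effect, but the credential's value itself must never be written to the log.

```python
def audit_login(token_valid, username, log):
    """token_valid is derived from HIGH-security data; username is LOW.

    Branching on token_valid is an implicit flow, permitted because the
    logging effect carries a LOW control level. Writing the token itself
    would be an explicit HIGH-data flow and would be rejected.
    """
    if token_valid:                          # implicit flow: allowed
        log.append(f"login ok: {username}")  # only LOW data reaches the log
    else:
        log.append("login failed")

log = []
audit_login(True, "alice", log)
print(log)  # ['login ok: alice']
```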
The related‑work discussion contrasts the approach with existing information‑flow languages such as Jif, FlowCaml, and the original DCC extensions. Those systems typically enforce a monolithic lattice, leading either to over‑conservative rejections or to the need for ad‑hoc declassification mechanisms. By cleanly separating data and control lattices, the present work offers a principled, type‑based way to mix strong and weak security guarantees without sacrificing soundness.
In conclusion, the paper demonstrates that decoupling data and control dependencies yields a more expressive and flexible security framework. It preserves the rigorous guarantees of non‑interference for explicit flows while allowing trace‑based reasoning for implicit flows, thereby reconciling two traditionally competing security models. Future directions include automated inference of the two lattices, extensions to multi‑principal settings, and integration with runtime monitoring tools to enforce the composed policies dynamically.