Quantifying Timing Leaks and Cost Optimisation
We develop a new notion of security against timing attacks where the attacker is able to simultaneously observe the execution time of a program and the probability of the values of low variables. We then show how to measure the security of a program with respect to this notion via a computable estimate of the timing leakage and use this estimate for cost optimisation.
💡 Research Summary
The paper introduces a novel security model for timing attacks that assumes an attacker can simultaneously observe a program’s execution time and the probability distribution of low‑security (public) variables. Traditional timing‑attack models typically consider only execution time as observable, which underestimates the attacker’s capabilities when side‑channel information such as cache behavior, power consumption, or previously leaked data can be combined with timing measurements. By extending the threat model to include low‑security variables, the authors capture a more realistic “simultaneous observation” scenario that aligns with modern multi‑modal side‑channel attacks.
To quantify information leakage under this model, the authors define a metric based on conditional entropy reduction. Let S denote the high‑security secret (e.g., a cryptographic key), L the low‑security variables, and τ the observed execution time. The attacker's prior uncertainty is the entropy H(S); after observing (τ, L), the posterior uncertainty is the conditional entropy H(S | τ, L). The leakage λ is defined as the difference λ = H(S) − H(S | τ, L), i.e., the mutual information between the secret and the joint observation. This formulation naturally incorporates both timing and low‑variable information, and it reduces to the classic timing‑only leakage when L provides no additional information.
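The entropy-based definition can be made concrete with a toy computation. The sketch below (illustrative only, not the paper's implementation; it writes S for the secret to keep it distinct from the entropy function H(·)) computes λ = H(S) − H(S | τ, L) from an explicit joint distribution for a two-path toy program whose timing reveals the secret only for one low input:

```python
import math
from collections import defaultdict

def entropy(dist):
    """Shannon entropy in bits of a distribution {outcome: probability}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def leakage(joint):
    """λ = H(S) - H(S | τ, L) from a joint distribution
    {(secret, (time, low)): probability}."""
    p_s = defaultdict(float)      # marginal over the secret S
    p_obs = defaultdict(float)    # marginal over the observation (τ, L)
    for (s, obs), p in joint.items():
        p_s[s] += p
        p_obs[obs] += p
    # H(S | τ, L) = Σ_obs p(obs) · H(S | obs)
    h_cond = 0.0
    for obs, p_o in p_obs.items():
        cond = {s: p / p_o for (s, o), p in joint.items() if o == obs}
        h_cond += p_o * entropy(cond)
    return entropy(p_s) - h_cond

# Toy program: uniform secret bit s and low bit l; execution time τ
# distinguishes the secrets only when l = 1.
joint = {
    (0, (1, 0)): 0.25, (1, (1, 0)): 0.25,   # l = 0: both secrets take 1 tick
    (0, (1, 1)): 0.25, (1, (2, 1)): 0.25,   # l = 1: timing pins down s
}
print(leakage(joint))  # → 0.5 (bits: the observation identifies s half the time)
```

A timing-only analysis of the same program would pool the l = 0 and l = 1 runs and report less leakage, which is exactly the gap the richer observation model closes.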
Because exact computation of λ is generally infeasible for realistic programs, the paper proposes two complementary estimation techniques. The first is a statistical approach that collects a large set of execution traces, builds an empirical joint distribution P(τ, L), and applies Bayes’ rule to approximate the posterior entropy. Monte‑Carlo sampling is used to control variance and to handle high‑dimensional L spaces. The second technique leverages symbolic execution: program paths are enumerated, each path is annotated with a time interval (lower and upper bounds), and the probability of each path (derived from the distribution of L) is used to compute a weighted entropy estimate. By combining static path analysis with dynamic sampling, the authors achieve accurate leakage estimates while keeping computational cost manageable.
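The statistical approach can be sketched as follows. Everything here is an assumption for illustration: `run_program` is a stand-in timing model with a key-dependent branch, not one of the paper's benchmarks, and the sampling loop is a minimal Monte-Carlo estimator over the empirical joint distribution:

```python
import math
import random
from collections import Counter

def run_program(secret, low):
    """Stand-in for an instrumented execution returning the observed time.
    Hypothetical timing model: a key-dependent branch costs one extra tick,
    but only when the low bit is set."""
    return 1 + (secret & low)

def estimate_leakage(secrets, lows, n=50_000, seed=0):
    """Monte-Carlo estimate of λ = H(S) - H(S | τ, L) from sampled traces."""
    rng = random.Random(seed)
    counts = Counter()                       # empirical joint over (s, (τ, l))
    for _ in range(n):
        s, l = rng.choice(secrets), rng.choice(lows)
        counts[(s, (run_program(s, l), l))] += 1

    def H(dist):
        return -sum(p * math.log2(p) for p in dist.values() if p > 0)

    p_s, p_obs = Counter(), Counter()
    for (s, obs), c in counts.items():
        p_s[s] += c / n
        p_obs[obs] += c / n
    h_cond = sum(
        p_o * H({s: c / n / p_o for (s, o), c in counts.items() if o == obs})
        for obs, p_o in p_obs.items()
    )
    return H(p_s) - h_cond

print(estimate_leakage(secrets=[0, 1], lows=[0, 1]))  # close to 0.5 bits
```

With a uniform secret and low bit, the estimate converges on the analytic value for this toy model; controlling the variance of exactly this kind of estimator is what the paper's Monte-Carlo machinery addresses.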
Beyond measurement, the core contribution is a cost‑optimisation framework that balances security (bounded leakage) against resource consumption. The authors model execution cost C as a weighted sum of runtime, memory usage, and power consumption. The optimisation problem is:
minimize C(π) subject to λ(π) ≤ ε,
where π denotes a program transformation (e.g., branch reordering, loop unrolling, padding insertion) and ε is a user‑specified leakage budget. Since λ(π) is a non‑convex function of the transformation parameters, the paper introduces a convex relaxation of the leakage constraint, replacing λ with an upper bound that is linear in the transformation variables. This enables the use of gradient‑based heuristics to search the space of feasible transformations. The authors also employ Pareto‑front analysis to present trade‑off curves, allowing designers to select a point that best matches their security‑performance requirements.
The experimental evaluation targets three representative workloads: an AES‑128 encryption routine, an RSA signing operation, and a database query engine. For each workload, the authors compare their leakage estimates with those produced by existing timing‑only tools (e.g., CacheD, TimeGuard). Results show that the proposed metric detects up to 30 % more leakage because it accounts for low‑variable information. In the optimisation phase, the framework reduces leakage to meet ε while incurring less than 15 % overhead in runtime, and often even reduces memory or power consumption due to more balanced code layouts. Notably, for AES the framework applies loop unrolling and branch flattening to equalise the execution time of different key‑dependent paths, achieving a 33 % reduction in measured leakage. For RSA, constant‑time modular exponentiation with carefully placed dummy operations brings the leakage below the prescribed budget with only a 10 % increase in execution time.
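The branch-flattening idea used for the AES workload can be illustrated with a branchless conditional select, a standard instance of the transformation. The sketch below is illustrative only: Python offers no real constant-time guarantees (interpreter overhead dominates), but the transformation itself, replacing a secret-dependent branch with masking so that both operands are always touched, is the kind applied to equalise key-dependent paths:

```python
def select_branchy(bit, a, b):
    """Key-dependent branch: the two paths may take different time."""
    if bit:
        return a
    return b

def select_flat(bit, a, b):
    """Branch-flattened equivalent: both inputs are always read and combined,
    so control flow (and, in a compiled setting, timing) no longer depends
    on the secret bit. Assumes bit is 0 or 1."""
    mask = -bit                    # 0 -> ...000, 1 -> ...111 (two's complement)
    return (a & mask) | (b & ~mask)

print(select_flat(1, 0xAB, 0xCD))  # → 171 (0xAB)
print(select_flat(0, 0xAB, 0xCD))  # → 205 (0xCD)
```

The padding-based RSA mitigation follows the same principle at coarser granularity: dummy operations are inserted so that every key-dependent path through the modular exponentiation costs the same.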
In summary, the paper makes four key contributions:
- A richer threat model that incorporates simultaneous observation of execution time and low‑security variable distributions, reflecting modern multi‑modal side‑channel attacks.
- A formal leakage metric based on conditional entropy reduction, together with practical statistical and symbolic‑execution‑based estimation methods.
- A unified optimisation framework that enforces a leakage bound while minimising a composite cost function, using convex relaxation and gradient‑based search.
- Comprehensive empirical validation demonstrating that the approach yields more accurate leakage detection and more efficient security‑performance trade‑offs than existing timing‑only techniques.
The authors conclude by outlining future directions, including extending the model to distributed and multi‑core environments, integrating machine‑learning predictors for leakage estimation, and automating the transformation pipeline within compiler toolchains. This work therefore provides both a theoretical foundation and a practical toolkit for developers and security engineers seeking to design software that is provably resistant to sophisticated timing attacks while remaining resource‑efficient.