On the Measurement of Privacy as an Attacker's Estimation Error
A wide variety of privacy metrics have been proposed in the literature to evaluate the level of protection offered by privacy-enhancing technologies. Most of these metrics are specific to concrete systems and adversarial models, and are difficult to generalize or translate to other contexts. Furthermore, a better understanding of the relationships between the different privacy metrics is needed to enable a more grounded and systematic approach to measuring privacy, as well as to assist system designers in selecting the most appropriate metric for a given application. In this work we propose a theoretical framework for privacy-preserving systems, endowed with a general definition of privacy in terms of the estimation error incurred by an attacker who aims to disclose the private information that the system is designed to conceal. We show that our framework permits interpreting and comparing a number of well-known metrics under a common perspective. The arguments behind these interpretations are based on fundamental results from information theory, probability theory, and Bayesian decision theory.
💡 Research Summary
The paper addresses the fragmented landscape of privacy metrics by proposing a unified theoretical framework that defines privacy as the estimation error incurred by an adversary attempting to infer private information. The authors model the private attribute as a random variable X and the observable (potentially perturbed) data as Y. An attacker, equipped with a prior distribution p(X) and observing Y, employs a Bayes estimator \hat X(Y) to minimize a chosen loss function L(x,\hat x). The expected loss, or average risk R = E[L(X, \hat X(Y))], then quantifies the attacker's estimation error: the larger the minimum attainable risk, the stronger the privacy offered by the system.
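The risk-based view of privacy described above can be illustrated with a small numerical sketch. The example below (all distributions are illustrative assumptions, not taken from the paper) computes, for a discrete private attribute X observed through a noisy channel p(Y|X), the attacker's posterior, the Bayes estimator under 0-1 loss (the MAP rule), and the resulting average risk, which under 0-1 loss equals the attacker's probability of error:

```python
import numpy as np

# Toy example (illustrative numbers, not from the paper):
# private attribute X in {0,1,2}, observation Y in {0,1,2}
# produced by a noisy channel p(Y|X).
p_x = np.array([0.5, 0.3, 0.2])      # attacker's prior p(X)
p_y_given_x = np.array([             # channel p(Y|X), rows indexed by x
    [0.8, 0.1, 0.1],
    [0.1, 0.8, 0.1],
    [0.1, 0.1, 0.8],
])

p_xy = p_x[:, None] * p_y_given_x    # joint p(X, Y)
p_y = p_xy.sum(axis=0)               # marginal p(Y)
posterior = p_xy / p_y               # p(X | Y), columns indexed by y

# Under 0-1 loss the Bayes estimator is the MAP rule, and the average
# risk R = E[L(X, Xhat(Y))] is the attacker's probability of error.
map_estimate = posterior.argmax(axis=0)   # Bayes decision xhat(y) for each y
prob_correct = sum(p_y[y] * posterior[map_estimate[y], y] for y in range(3))
bayes_risk = 1.0 - prob_correct           # privacy = attacker's estimation error
print(round(bayes_risk, 3))
```

A noisier channel p(Y|X) would raise the Bayes risk toward its prior-only ceiling (here 1 - max p(X) = 0.5), matching the framework's intuition that more perturbation yields more privacy.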