Privacy and Data Protection by Design - from policy to engineering
Privacy and data protection constitute core values of individuals and of democratic societies. There have been decades of debate on how those values (and legal obligations) can be embedded into systems, preferably from the very beginning of the design process. One important element in this endeavour is technical mechanisms, known as privacy-enhancing technologies (PETs). Their effectiveness has been demonstrated by researchers and in pilot implementations. However, with a few exceptions, such as encryption, which has become widely used, PETs have not become a standard, widely used component in system design. Furthermore, to deliver their full benefit for privacy and data protection, PETs need to be rooted in a data-governance strategy applied in practice. This report contributes to bridging the gap between the legal framework and the available technological implementation measures by providing an inventory of existing approaches, privacy design strategies, and technical building blocks of various degrees of maturity from research and development. Starting from the privacy principles of the legislation, important elements are presented as a first step towards a design process for privacy-friendly systems and services. The report sketches a method to map legal obligations to design strategies, which allow the system designer to select appropriate techniques for implementing the identified privacy requirements. Furthermore, the report reflects on the limitations of the approach. It concludes with recommendations on how to overcome and mitigate these limits.
💡 Research Summary
The paper addresses the longstanding challenge of embedding privacy and data‑protection values—mandated by law—into the earliest stages of system design. While privacy‑enhancing technologies (PETs) have demonstrated technical effectiveness in research and pilot projects, only a few, such as encryption, have become mainstream components of software architecture. The authors argue that the limited adoption of most PETs stems from a disconnect between legal obligations and engineering practice, and from the absence of a coherent data‑governance framework that can operationalise these technologies.
The report first extracts the core principles of major privacy regulations (e.g., GDPR, CCPA, Korean Personal Information Protection Act) – purpose limitation, data minimisation, transparency, data‑subject rights, and security – and maps them onto a set of seven privacy‑design strategies: (1) data minimisation, (2) pseudonymisation/anonymisation, (3) encryption, (4) access control and authentication, (5) audit and monitoring, (6) user control and consent management, and (7) transparency and explainability. For each strategy the authors inventory existing PETs, classifying them by maturity level (concept, prototype, commercial) and by the technical prerequisites required for deployment (e.g., computational overhead, infrastructure dependencies, legal interpretability).
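The principle-to-strategy mapping described above can be pictured as a simple lookup table. The principle and strategy names below follow this summary, but the specific pairings are an illustrative assumption for demonstration, not the report's actual mapping:

```python
# Illustrative mapping from legal privacy principles to the seven
# privacy-design strategies. The pairings are assumed for demonstration;
# the report derives its mapping systematically from the legislation.
PRINCIPLE_TO_STRATEGIES = {
    "purpose limitation": ["data minimisation", "user control and consent management"],
    "data minimisation": ["data minimisation", "pseudonymisation/anonymisation"],
    "transparency": ["transparency and explainability", "audit and monitoring"],
    "data-subject rights": ["user control and consent management"],
    "security": ["encryption", "access control and authentication"],
}

def strategies_for(principles):
    """Collect candidate design strategies for a set of legal principles,
    preserving order and dropping duplicates."""
    seen, result = set(), []
    for principle in principles:
        for strategy in PRINCIPLE_TO_STRATEGIES.get(principle, []):
            if strategy not in seen:
                seen.add(strategy)
                result.append(strategy)
    return result
```

Given such a table, a designer could feed in the principles identified during legal analysis and receive the shortlist of strategies to evaluate further.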
Key observations emerge from this mapping. Encryption is already widely deployed, whereas techniques such as differential privacy, federated learning, homomorphic encryption, and zero‑knowledge proofs remain largely experimental. The gap is attributed to factors including algorithmic complexity, performance penalties, lack of standardised metrics for privacy guarantees, and uncertainty about how regulators will interpret novel technical safeguards.
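Of the experimental techniques named above, differential privacy is compact enough to sketch: the standard Laplace mechanism adds noise calibrated to a query's sensitivity and a privacy budget epsilon. This is a minimal textbook sketch, not code from the report:

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Return an epsilon-differentially-private answer to a numeric query
    by adding Laplace noise with scale = sensitivity / epsilon
    (the standard calibration for the Laplace mechanism)."""
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) by inverse-transform sampling:
    # u is uniform on [-0.5, 0.5); noise = -scale * sgn(u) * ln(1 - 2|u|).
    u = random.random() - 0.5
    sign = -1.0 if u < 0 else 1.0
    noise = -scale * sign * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise
```

The trade-off the summary mentions is visible here: a small epsilon (strong guarantee) yields a large noise scale and thus a heavy accuracy penalty, which is one reason such techniques have been slow to leave the laboratory.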
To bridge the gap, the authors propose integrating PET selection into a broader data‑governance framework. Essential governance artefacts include a data catalogue, metadata management, designated data stewards, risk‑assessment procedures, and policy‑automation tools. They introduce two artefacts: a Privacy Requirements Specification (PRDS) that translates legal clauses into concrete technical requirements, and a Privacy Design Matrix (PDM) that links each requirement to candidate PETs and associated trade‑offs.
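One way to picture the proposed Privacy Design Matrix is as a table whose rows link a technical requirement to candidate PETs and their trade-offs. The class, field names, and example row below are hypothetical illustrations, not artefacts defined in the report:

```python
from dataclasses import dataclass, field

@dataclass
class PDMRow:
    """One row of a (hypothetical) Privacy Design Matrix: a technical
    requirement, the candidate PETs that could satisfy it, and known
    trade-offs per PET."""
    requirement: str                 # derived from the requirements specification
    candidate_pets: list
    trade_offs: dict = field(default_factory=dict)  # pet -> cost / limitation

# Illustrative example row (contents assumed, not from the report).
matrix = [
    PDMRow(
        requirement="Usage telemetry must not expose individual users",
        candidate_pets=["differential privacy", "k-anonymity"],
        trade_offs={"differential privacy": "accuracy loss at low epsilon"},
    ),
]

def candidates_for(matrix, requirement):
    """Look up the candidate PETs recorded for a given requirement."""
    for row in matrix:
        if row.requirement == requirement:
            return row.candidate_pets
    return []
```

Representing the matrix as structured data rather than a static document is one plausible path toward the policy-automation tools the authors list among the governance artefacts.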
The paper also critiques current practice. Legal principles are often abstract, making automated mapping difficult; there is no universally accepted metric for quantifying PET effectiveness, hampering cost‑benefit analyses; and organisational factors—skill gaps, cultural resistance, and legacy systems—inflate adoption costs. In response, the authors outline a “Privacy Engineering Workflow” comprising five stages: (1) legal and regulatory analysis, (2) selection of privacy‑design strategies, (3) evaluation of PET candidates, (4) prototype implementation and verification, and (5) operational monitoring and continuous improvement. Each stage is equipped with checklists and quality gates to ensure compliance and technical soundness.
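The five-stage workflow with quality gates can be sketched as a pipeline in which each stage's gate must pass before the next stage begins. The stage names follow the summary; the gate predicates are illustrative placeholders, not the report's actual checklists:

```python
# The five workflow stages named above, each paired with a "quality gate"
# predicate over a shared project context. Gate logic is an assumed
# placeholder for the report's checklists.
STAGES = [
    ("legal and regulatory analysis",
     lambda ctx: bool(ctx.get("requirements"))),
    ("selection of privacy-design strategies",
     lambda ctx: bool(ctx.get("strategies"))),
    ("evaluation of PET candidates",
     lambda ctx: bool(ctx.get("pets"))),
    ("prototype implementation and verification",
     lambda ctx: ctx.get("tests_passed", False)),
    ("operational monitoring and continuous improvement",
     lambda ctx: ctx.get("monitoring_in_place", False)),
]

def run_workflow(ctx):
    """Advance through the stages in order, stopping at the first
    failed gate. Returns the names of the completed stages."""
    completed = []
    for name, gate in STAGES:
        if not gate(ctx):
            break
        completed.append(name)
    return completed
```

The sequential gating mirrors the paper's intent: a project that has not yet verified its prototype, for example, cannot claim the monitoring stage, keeping compliance claims tied to demonstrated progress.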
Finally, the report issues concrete recommendations for policymakers, standards bodies, and industry. It calls for the creation of a dedicated privacy‑by‑design standard (e.g., an ISO/IEC series) that codifies PET maturity levels and implementation guidelines. It advocates for education and certification programmes to build a skilled privacy‑engineer workforce, and for public‑private pilot initiatives that generate real‑world evidence of PET efficacy. Moreover, it suggests establishing a “technical evidence repository” that documents how specific PETs satisfy regulatory requirements, thereby providing regulators with a transparent basis for assessment.
In sum, the paper provides a systematic bridge from privacy legislation to concrete engineering actions, offering a taxonomy of design strategies, an inventory of technical building blocks, a governance‑centric methodology for mapping legal duties to technical solutions, and a set of actionable recommendations aimed at normalising the use of PETs in everyday system development.