From Clicks to Consensus: Collective Consent Assemblies for Data Governance


Obtaining meaningful and informed consent from users is essential for ensuring autonomy and control over one’s data. Notice and consent, the standard mechanism for collecting it, has been widely criticized. While other individualized solutions have been proposed, this paper argues that a collective approach to consent is worth exploring. First, individual consent is not feasible to collect in every data-collection scenario. Second, harms resulting from data processing are often communal, given how interconnected much data is. Finally, ensuring truly informed consent for every individual has proven impractical. We propose collective consent, operationalized through consent assemblies, as one alternative framework. We establish collective consent’s theoretical foundations and use speculative design to envision consent assemblies that leverage deliberative mini-publics. We present two vignettes: i) replacing notice and consent, and ii) collecting consent for GenAI model training. We then employ future backcasting to identify the requirements for realizing collective consent and explore its potential applications in contexts where individual consent is infeasible.


💡 Research Summary

The paper “From Clicks to Consensus: Collective Consent Assemblies for Data Governance” critiques the dominant notice‑and‑consent paradigm that underpins GDPR, ePrivacy, CCPA and similar regulations. The authors argue that this model suffers from severe usability problems (overly long legalese, dark‑pattern designs), creates consent fatigue, and fails to provide genuinely informed, meaningful consent—especially for vulnerable groups such as children, the elderly, or users with limited digital literacy. Moreover, the authors point out that many data‑related harms are collective: data from social networks, location traces, or public‑space surveillance affect groups rather than isolated individuals, and the reuse of already‑collected data for generative AI training makes retroactive or future‑use consent practically impossible.

To address these shortcomings, the authors propose a “collective consent” framework operationalized through “consent assemblies.” Drawing on the deliberative mini‑publics literature, a consent assembly is a randomly selected, demographically representative panel that stands in for all individuals affected by a given data‑processing activity. The assembly receives balanced information from data controllers, regulators, civil‑society experts, and possibly affected users, then deliberates and issues a collective decision on whether, under what conditions, and with what safeguards the data may be used. Core design principles include representativeness, transparency (e.g., auditable logs, possibly blockchain‑based), dynamic consent (the ability to revoke or modify decisions), and accountability (binding outcomes for regulators and firms).
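
The paper stops short of specifying a sampling procedure, but the stratified sortition that a representative panel implies is straightforward to sketch. Below is a minimal Python sketch under assumed inputs: the volunteer pool, strata, and seat quotas are illustrative and do not come from the paper.

```python
import random
from collections import defaultdict

def draw_assembly(pool, strata_key, seats):
    """Draw a demographically stratified panel from a volunteer pool.

    pool       -- list of dicts, e.g. {"id": 17, "region": "north"}
    strata_key -- function mapping a person to their stratum label
    seats      -- seat quota per stratum, sized in proportion to the
                  affected population (not the self-selected pool)
    """
    by_stratum = defaultdict(list)
    for person in pool:
        by_stratum[strata_key(person)].append(person)

    panel = []
    for stratum, quota in seats.items():
        candidates = by_stratum.get(stratum, [])
        if len(candidates) < quota:
            raise ValueError(f"not enough volunteers in stratum {stratum!r}")
        panel.extend(random.sample(candidates, quota))  # uniform draw within stratum
    return panel

# Illustrative draw: a 6-seat panel balanced across two regions.
pool = [{"id": i, "region": "north" if i % 2 else "south"} for i in range(100)]
panel = draw_assembly(pool, lambda p: p["region"], {"north": 3, "south": 3})
```

In a real deployment the quotas would be derived from census data, so that the panel mirrors the affected population rather than whoever happened to volunteer.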

The paper illustrates the concept with two speculative vignettes. In the first, traditional website cookie notices are replaced by regional consent assemblies that evaluate and approve data‑collection policies on behalf of their constituents, eliminating per‑site pop‑ups. In the second, a national‑level assembly reviews proposals to use large‑scale datasets for training generative AI models, imposing restrictions on sensitive categories, mandating compensation, or requiring differential‑privacy safeguards. Both vignettes are explored through speculative design methods, showing how assemblies could be convened, how information would be presented, and how decisions would be communicated to stakeholders.
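
As a thought experiment in the same speculative register: the cookie-notice vignette implies that a browser or site could consult a regional assembly’s standing decision instead of prompting each visitor. The sketch below assumes a hypothetical registry endpoint and decision schema, neither of which the paper defines.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Hypothetical registry endpoint; the paper specifies no concrete API.
REGISTRY = "https://assembly-registry.example/decisions"

def consent_for(site: str, region: str) -> dict:
    """Fetch the regional assembly's standing decision for a site,
    replacing the per-site consent pop-up with a single lookup."""
    query = urlencode({"site": site, "region": region})
    with urlopen(f"{REGISTRY}?{query}") as resp:
        return json.load(resp)

# An assumed decision record might look like:
# {"site": "news.example", "region": "EU-NW",
#  "allowed": ["functional"], "denied": ["ad-tracking", "resale"],
#  "decided": "2035-03-01", "review_by": "2036-03-01"}
```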

To move from speculation to practice, the authors employ future backcasting, targeting a 2035 horizon. They identify five clusters of requirements: (1) policy and legal reforms (e.g., amending GDPR to recognize collective consent as a lawful basis), (2) technical infrastructure (digital platforms for random sampling, secure deliberation tools, transparent decision‑recording), (3) social acceptance (public education, trust‑building campaigns), (4) economic incentives (funding mechanisms for assembly operation, compensation models for data contributors), and (5) stakeholder alignment (engaging industry, regulators, NGOs to co‑design the process). The paper also discusses potential pathways for embedding assembly outcomes into existing compliance frameworks, such as regulatory sandboxes or binding administrative orders.
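
On the "transparent decision-recording" requirement, one plausible realization, and a lighter-weight cousin of the blockchain option floated above, is a tamper-evident, hash-chained log. This sketch assumes nothing beyond a standard hash function and is not a mechanism the paper prescribes.

```python
import hashlib
import json
import time

class DecisionLog:
    """Append-only, hash-chained record of assembly decisions.
    Any retroactive edit to an entry breaks every later hash."""

    def __init__(self):
        self.entries = []
        self._prev = "0" * 64  # genesis hash

    def append(self, decision: dict) -> str:
        entry = {"decision": decision, "timestamp": time.time(),
                 "prev_hash": self._prev}
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = digest
        self.entries.append(entry)
        self._prev = digest
        return digest

    def verify(self) -> bool:
        """Re-derive every hash; returns False if any entry was altered."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if body["prev_hash"] != prev or recomputed != entry["hash"]:
                return False
            prev = entry["hash"]
        return True
```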

The authors compare collective consent with existing alternatives—browser‑level Global Privacy Control, “reject‑all” extensions, and predictive‑consent AI agents. They argue that while these tools reduce the number of individual clicks, they still leave the burden of granular decision‑making on users, are vulnerable to non‑adoption by dominant platforms, and do not address collective harms or future‑use scenarios. In contrast, consent assemblies can internalize societal values, provide a venue for public deliberation on trade‑offs, and generate decisions that are both scalable and socially legitimate.
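
For context, Global Privacy Control, unlike the assemblies, already exists: it is a single binary opt-out that browsers transmit as a `Sec-GPC: 1` request header (and expose to scripts as `navigator.globalPrivacyControl`). Its coarseness is precisely the limitation the authors point to: one bit cannot express the conditional, collective trade-offs an assembly could deliberate. The handler names below are hypothetical.

```python
def honors_gpc(headers) -> bool:
    """True when the request carries the Global Privacy Control
    opt-out signal (the Sec-GPC header defined by the GPC spec)."""
    return headers.get("Sec-GPC") == "1"

# Hypothetical usage inside a request handler:
# if honors_gpc(request.headers):
#     config = tracking_free_defaults()  # hypothetical helper
```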

Nevertheless, the paper acknowledges substantial challenges: ensuring true representativeness, managing the cost and logistics of regular assemblies, establishing the legal enforceability of collective decisions, and overcoming industry resistance to ceding control over data‑use policies. The authors suggest pilot studies, interdisciplinary collaborations, and incremental policy experiments (e.g., limited‑scope assemblies for specific high‑risk domains) as next steps.

In sum, the paper offers a bold re‑imagining of data governance: shifting from an individual‑centric, click‑based consent model to a democratic, deliberative collective consent mechanism. By embedding consent decisions in structured, representative assemblies, the approach aims to protect privacy, reduce user burden, and better align data practices with societal values in an era of pervasive data collection and generative AI. Future work should focus on prototyping assemblies, measuring their effectiveness, and adapting the model across different legal and cultural contexts.

