Three Lessons from Citizen-Centric Participatory AI Design


This workshop paper examines challenges in designing agentic AI systems from a citizen-centric perspective. Drawing on three participatory workshops conducted in 2025 with members of the general public and cross-sector stakeholders, we explore how societal values and expectations shape visions of future AI agents. Using constructive design research methods, participants engaged in storytelling and lo-fi prototyping to reflect on potential community impacts. We identify three key challenges: enabling meaningful and sustained public engagement, establishing a shared language between experts and lay participants, and translating speculative participant input into implementable systems. We argue that reflexive, long-term participation is essential for responsible and actionable citizen-centric AI development.


💡 Research Summary

This workshop paper reports on three participatory design sessions conducted in 2025 to explore citizen‑centric approaches to building agentic AI systems. The authors organized two initial workshops (WS1 and WS2) with 14 members of the general public, including students, retirees, journalists, clergy, and legal professionals. In these sessions, participants responded to story‑completion prompts (“I woke up this morning and something extraordinary/awful happened with the AI agent…”) and created low‑fidelity (lo‑fi) prototypes from tangible materials such as Lego bricks, Play‑Doh, magazine cut‑outs, stickers, and scrap paper. This hands‑on, narrative‑driven method was intended to surface participants’ values, hopes, and fears about future AI agents and to give them a concrete medium for expressing abstract concepts.

A third workshop (WS3) expanded the participant pool to 43 cross‑sector stakeholders, including government officials, AI researchers, industry representatives, and NGO representatives. Building on the 14 artefacts generated in WS1 and WS2, WS3 asked participants to place each artefact on a timeline spanning 2025–2035, assess its feasibility, identify required resources, and discuss potential improvements, opportunities, and challenges. A design workbook guided WS3 participants through a systematic evaluation of technical viability, ethical considerations, and infrastructural needs.

From these activities the authors distilled three interrelated challenges that they argue are central to citizen‑centric AI design:

  1. Enabling Meaningful and Sustained Engagement – Recruiting a diverse set of participants proved difficult, and the decision to use a new cohort for each workshop, while increasing the breadth of perspectives, sacrificed longitudinal continuity. The authors note that participants who follow a design concept from ideation through prototype testing to eventual deployment would develop deeper AI literacy and a stronger sense of ownership. They propose that future studies retain a core group of citizens throughout the entire design lifecycle.

  2. Establishing a Shared Language Between Experts and Laypeople – The term “agent” and broader AI jargon carry multiple, sometimes contradictory, meanings. The researchers attempted to level the playing field by delivering a brief introductory briefing on AI and agentic systems, but they acknowledge that any framing inevitably reflects the researchers’ own biases. They recommend reflexive practices, co‑construction of terminology, and visual storytelling to mitigate inadvertent nudging and to ensure that all participants develop a common conceptual grounding.

  3. Translating Speculative Input into Implementable Systems – The 14 artefacts from WS1 and WS2 embodied rich, context‑specific visions but lacked clear pathways to technical realization. WS3 served as a bridge by involving domain experts who could comment on feasibility, required infrastructure, and policy implications. The authors argue that a systematic translation pipeline is needed: each citizen‑generated idea should be paired with feasibility studies, iterative prototyping, and stakeholder feedback loops. Mapping ideas onto technical constraints, ethical imperatives, and societal needs ensures that creative citizen contributions are not lost as speculative “blue‑sky” concepts but instead become actionable design directions.

The paper concludes that citizen‑centric AI design must move beyond tokenistic participation (“participation washing”) toward a reflexive, long‑term, and interdisciplinary process. By embedding mechanisms for sustained engagement, co‑creating a shared vocabulary, and establishing structured translation pathways, designers can harness the diverse insights of the public while grounding them in realistic technical and policy contexts. The authors see their three lessons as a foundation for future research on participatory AI governance, suggesting that such frameworks can help produce AI systems that are both innovative and aligned with societal values.

