Beyond Abstract Compliance: Operationalising trust in AI as a moral relationship

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the [Original Paper Viewer] below or the Original ArXiv Source.

Dominant approaches, such as the EU's "Trustworthy AI" framework, treat trust as a property that can be designed for, evaluated, and governed according to normative and technical criteria. They do not address how trust is subjectively cultivated and experienced, culturally embedded, and inherently relational. This paper proposes expanded principles for trust in AI that can be incorporated into common development methods and that frame trust as a dynamic, temporal relationship built on transparency and mutual respect. We draw on relational ethics and, in particular, African communitarian philosophies to foreground the nuances of inclusive, participatory processes and long-term relationships with communities. Involving communities throughout the AI lifecycle can foster meaningful relationships with AI design and development teams that incrementally build trust and promote more equitable and context-sensitive AI systems. We illustrate how trust-enabling principles based on African relational ethics can be operationalised in two AI use cases: healthcare and education.


💡 Research Summary

The paper opens by critiquing the dominant “trust‑by‑design” paradigm that underpins most contemporary AI governance frameworks, such as the European Commission’s Trustworthy AI, OECD, and UNESCO guidelines. These frameworks treat trust as a property that can be engineered, measured, and certified through checklists, fairness metrics, model cards, explainability toolkits, and algorithmic impact assessments. While technically useful, the authors argue that this compliance‑oriented view abstracts away the lived, cultural, and relational dimensions of trust that users and communities actually experience.

Drawing on socio‑technical literature, the authors emphasize that trust is not a static attribute of a system but an emergent, negotiated relationship among users, developers, institutions, and the broader sociotechnical assemblage. Transparency, for example, only builds trust when it aligns with users’ goals and social roles; fairness metrics cannot resolve deep‑seated power asymmetries; and “ethics washing” can occur when organizations merely tick boxes without altering underlying power structures.

To move beyond this abstraction, the paper introduces relational ethics and, more specifically, the African communitarian philosophy of Ubuntu ("I am because we are"). Ubuntu foregrounds personhood as constituted through reciprocal relationships, mutual care, and collective well‑being. The authors argue that embedding Ubuntu into AI design can re‑orient trust from an individualistic risk calculus to a moral relationship that obliges all stakeholders to act with honesty, responsiveness, and respect.

From this philosophical grounding, four concrete trust‑enabling principles are derived:

  1. Communitarianism – Design processes must recognize and prioritize the collective good of the community, not just individual rights.
  2. Respect for Others – Continuous listening to users and other stakeholders, and the mitigation of power imbalances through inclusive interfaces and feedback loops.
  3. Integrity – Full transparency about data sources, model provenance, and decision‑making pathways, coupled with clear accountability structures.
  4. Design Publicity – Open documentation of design decisions, allowing communities to audit, comment on, and co‑shape the system.

These principles are positioned as complementary to existing technical checklists; they add a relational layer that demands long‑term collaboration with the communities that will be affected by the AI system. The authors propose concrete integration steps within an agile development lifecycle: participatory requirement gathering workshops, co‑design sessions during prototyping, and continuous impact assessments that involve community representatives.
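As a minimal sketch of how the Design Publicity, Respect for Others, and Integrity principles might be supported in tooling (this implementation is not from the paper; the class and field names are hypothetical), a development team could maintain an auditable decision log that records each design choice alongside community comments:

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class DesignDecision:
    """One publicly documented design decision (Design Publicity)."""
    decision_id: str
    summary: str
    rationale: str
    decided_on: date
    # Community feedback attached to this decision (Respect for Others).
    community_comments: list[str] = field(default_factory=list)


class DecisionLog:
    """Append-only log that community representatives can audit and comment on."""

    def __init__(self) -> None:
        self._decisions: dict[str, DesignDecision] = {}

    def record(self, decision: DesignDecision) -> None:
        # Decisions are never silently overwritten (Integrity).
        if decision.decision_id in self._decisions:
            raise ValueError(f"decision {decision.decision_id} already recorded")
        self._decisions[decision.decision_id] = decision

    def comment(self, decision_id: str, comment: str) -> None:
        # Feedback is tied to the specific decision it concerns.
        self._decisions[decision_id].community_comments.append(comment)

    def audit(self) -> list[DesignDecision]:
        # The full history, in chronological order, is visible to all stakeholders.
        return sorted(self._decisions.values(), key=lambda d: d.decided_on)
```

In the participatory workshops the authors describe, such a log could be reviewed at each sprint boundary, with `comment` calls capturing stakeholder input before the next iteration begins.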

Two illustrative case studies demonstrate operationalisation. In healthcare, a “community‑based diagnostic assistance” system is co‑created by patients, clinicians, and local health committees. Data collection, model training, and result interpretation are all conducted transparently, with regular community oversight meetings that can request model revisions. In education, a “collaborative learning analytics platform” invites teachers, students, and parents to jointly define curriculum goals, select AI‑driven tutoring tools, and monitor algorithmic recommendations through an open dashboard. Both cases show how the four principles translate into concrete practices—transparent data pipelines, documented decision logs, and iterative community feedback—that gradually build trust over time.
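The oversight mechanism in both case studies, where community meetings can request model revisions, could be sketched as a simple aggregation rule over stakeholder feedback. This is an illustrative assumption, not the paper's method; the severity scale and threshold would themselves be agreed with the community:

```python
from dataclasses import dataclass


@dataclass
class Feedback:
    stakeholder: str  # e.g. "patient", "clinician", "teacher", "parent"
    concern: str
    severity: int     # 1 (minor) .. 5 (critical), a hypothetical scale


def revision_requested(feedback: list[Feedback], threshold: float = 4.0) -> bool:
    """Decide whether a community oversight meeting triggers a model revision.

    Any single critical concern is sufficient; otherwise the average
    severity across all stakeholders must reach the agreed threshold.
    """
    if not feedback:
        return False
    if any(f.severity >= 5 for f in feedback):
        return True
    return sum(f.severity for f in feedback) / len(feedback) >= threshold
```

The point of the sketch is that the trigger is collective and documented, rather than left to the development team's discretion, which mirrors the communitarian emphasis of the four principles.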

The paper concludes that reconceptualising trust as a moral, relational construct and grounding it in Ubuntu offers a viable pathway to enrich current AI ethics frameworks. It highlights the need for further research on standardising these principles, testing them across diverse cultural contexts, and developing longitudinal metrics that capture trust as a dynamic relationship rather than a one‑off compliance score.

