Beyond Procedural Compliance: Human Oversight as a Dimension of Well-being Efficacy in AI Governance
Reading time: 5 minutes
📝 Original Info
Title: Beyond Procedural Compliance: Human Oversight as a Dimension of Well-being Efficacy in AI Governance
ArXiv ID: 2512.13768
Date: 2025-12-15
Authors: Yao Xie, Walter Cullen
📝 Abstract
Major AI ethics guidelines and laws, including the EU AI Act, call for effective human oversight, but do not define it as a distinct and developable capacity. This paper introduces human oversight as a well-being capacity, situated within the emerging Well-being Efficacy framework. The concept integrates AI literacy, ethical discernment, and awareness of human needs, acknowledging that some needs may be conflicting or harmful. Because people inevitably project desires, fears, and interests into AI systems, oversight requires the competence to examine and, when necessary, restrain problematic demands.
The authors argue that the sustainable and cost-effective development of this capacity depends on its integration into education at every level, from professional training to lifelong learning. The frame of human oversight as a well-being capacity provides a practical path from high-level regulatory goals to the continuous cultivation of human agency and responsibility essential for safe and ethical AI. The paper establishes a theoretical foundation for future research on the pedagogical implementation and empirical validation of well-being effectiveness in multiple contexts.
📄 Full Content
Beyond Procedural Compliance: Human Oversight as a Dimension of Well-being Efficacy in AI Governance
Yao Xie
School of Medicine, University College Dublin
Dublin, Ireland
toyaoxie@gmail.com
Walter Cullen
School of Medicine, University College Dublin
Dublin, Ireland
walter.cullen@ucd.ie
Abstract
Major AI ethics guidelines and laws, including the EU AI Act, call for effective human oversight, but do not define it as a distinct and developable capacity. This paper introduces human oversight as a well-being capacity, situated within the emerging Well-being Efficacy framework. The concept integrates AI literacy, ethical discernment, and awareness of human needs, acknowledging that some needs may be conflicting or harmful. Because people inevitably project desires, fears, and interests into AI systems, oversight requires the competence to examine and, when necessary, restrain problematic demands.

The authors argue that the sustainable and cost-effective development of this capacity depends on its integration into education at every level, from professional training to lifelong learning. The frame of human oversight as a well-being capacity provides a practical path from high-level regulatory goals to the continuous cultivation of human agency and responsibility essential for safe and ethical AI. The paper establishes a theoretical foundation for future research on the pedagogical implementation and empirical validation of well-being effectiveness in multiple contexts.
1 Introduction
The contemporary world is entering an era defined by artificial intelligence (AI) (Xu et al., 2024).
Driven by rapid innovation, AI transforms how people learn, work, communicate, and decide
(Abuzaid, 2024; Afroogh et al., 2024). This movement marks an irreversible shift toward AI-mediated
environments where intelligent systems increasingly shape everyday life and well-being (Singh &
Tholia, 2024). The transformation brings new opportunities for creativity and efficiency but also
exposes people to losses in agency, coherence, and collective trust. The speed of technological
change, the weakening of human voice, and public indifference toward data protection reveal how
easily individuals trade agency for convenience (Yatani et al., 2024). These conditions make human
oversight not only a technical necessity but also a fundamental capacity for collective well-being
(Corrêa et al., 2025; Langer et al., 2024; Sterz et al.).
Human oversight has become a central principle of global AI governance (Koulu, 2020). The
European Union AI Act, the OECD AI Principles, and the UNESCO Recommendation on the Ethics
of Artificial Intelligence all emphasise its role in keeping technology aligned with human priorities
(Enqvist, 2023). These frameworks recognise that AI should not operate without human judgment.
Second Conference of the International Association for Safe and Ethical Artificial Intelligence (IASEAI’26).
However, their practical interpretations often focus on institutional examples: a clinician approving
an algorithmic diagnosis, a financial analyst authorising an automated transaction, or a teacher
monitoring AI-assisted assessment (Koulu, 2020). These examples describe regulated professional
contexts but fail to reflect the broader scope of AI influence such as the subtle ways in which design
and daily use shape human experience (Koulu, 2020).
Most people encounter AI in ordinary contexts rather than in professional ones. They interact with
recommendation systems that shape preferences, social media feeds that influence belief, and digital
platforms that collect and reuse personal data in what can be described as hybrid space, a term that ‘refer[s] to merging physical and digital spaces’ (de Souza e Silva et al., 2025). Public institutions also
increasingly depend on algorithmic tools in policing, welfare allocation, and recruitment. In such
environments, oversight becomes diffuse and largely invisible (Koulu, 2020; Sterz et al.). Current
regulations seem to assume that people already have the awareness and critical discernment needed
to make informed choices. In reality, many people engage with AI outputs passively, lacking the
reflective capacity necessary for effective oversight (Chen et al., 2023). Humans also have inherent
limitations in decision making, often relying on cognitive shortcuts that reduce awareness and
deliberation (Curran, 2015; Dale, 2015; Yoder & Decety, 2018).
The governance challenge extends beyond technical regulation to the cultivation of human awareness
and agency across all levels of society (Sigfrids et al., 2023; Van Popering-Verkerk et al., 2022).
Governance cannot depend solely on external rules or audits; it requires stronger human capacity
(Enqvist, 2023; Yeung et al., 2020). As AI becomes embedded in daily life, the question is not
how a single person supervises a specific system, but how human agency can be maintained in
complex, distributed, and rapidly evolving environments. This challenge calls for scalable human
oversight—the capa