Social, Legal, Ethical, Empathetic and Cultural Norm Operationalisation for AI Agents

Notice: This research summary and analysis were automatically generated using AI technology. For full accuracy, please refer to the original arXiv source.

As AI agents are increasingly used in high-stakes domains like healthcare and law enforcement, aligning their behaviour with social, legal, ethical, empathetic, and cultural (SLEEC) norms has become a critical engineering challenge. While international frameworks have established high-level normative principles for AI, a significant gap remains in translating these abstract principles into concrete, verifiable requirements. To address this gap, we propose a systematic SLEEC-norm operationalisation process for determining, validating, implementing, and verifying normative requirements. Furthermore, we survey the landscape of methods and tools supporting this process, and identify key remaining challenges and research avenues for addressing them. We thus establish a framework, and define a research and policy agenda, for developing AI agents that are not only functionally useful but also demonstrably aligned with human norms and values.


💡 Research Summary

The paper tackles a pressing problem in the deployment of artificial‑intelligence agents in high‑stakes domains such as healthcare, law enforcement, transportation, and education: how to ensure that these agents act in accordance with a broad set of human norms—social, legal, ethical, empathetic, and cultural (collectively abbreviated as SLEEC). While numerous international AI governance frameworks (UNESCO’s Recommendation on the Ethics of AI, OECD AI Principles, IEEE standards, the EU AI Act, GDPR, etc.) articulate high‑level principles like fairness, transparency, privacy, and human dignity, they stop short of providing concrete, verifiable requirements that can be directly engineered into a system. The authors therefore introduce the concept of “SLEEC‑norm operationalisation,” a systematic process that translates abstract normative principles into precise, machine‑interpretable rules, validates those rules, implements them in an AI agent, and finally verifies compliance before deployment.

The core contribution is a five‑stage operationalisation pipeline, illustrated in Figure 1 of the paper:

  1. AI Agent Capability Specification – Identify the agent’s functional capabilities (sensors, actuators, data access, APIs, etc.) and map each capability to the SLEEC norms it potentially triggers or must satisfy. For example, a camera raises privacy concerns, while an emergency‑call function invokes a duty‑of‑care obligation. The output is an informal capability list together with user‑oriented use‑case descriptions.

  2. SLEEC Requirements Elicitation – Conduct multi‑disciplinary stakeholder workshops (ethicists, lawyers, sociologists, developers, regulators, end‑users) to (a) select relevant high‑level principles, (b) derive “principle proxies” that capture the essence of each principle in operational terms (e.g., “obtain user assent” as a proxy for autonomy), and (c) encode these proxies as concrete rules using a domain‑specific language (DSL). The DSL follows the pattern `id when triggerEvent then responseEvent`, optionally qualified by defeater conditions (e.g., `unless` clauses) that suspend or override the obligation.

