arXiv 2512.09729


📝 Original Info

  • Title: arXiv 2512.09729
  • ArXiv ID: 2512.09729
  • Date: 2025-12-10
  • Authors: Laurynas Adomaitis, Vincent Israel-Jost, Alexei Grinbaum

📝 Abstract

We present Ethics Readiness Levels (ERLs), a four-level, iterative method to track how ethical reflection is implemented in the design of AI systems. ERLs bridge high-level ethical principles and everyday engineering by turning ethical values into concrete prompts, checks, and controls within real use cases. The evaluation is conducted using a dynamic, tree-like questionnaire built from context-specific indicators, ensuring relevance to the technology and application domain. Beyond being a managerial tool, ERLs help facilitate a structured dialogue between ethics experts and technical teams, while our scoring system helps track progress over time. We demonstrate the methodology through two case studies: an AI facial sketch generator for law enforcement and a collaborative industrial robot. The ERL tool effectively catalyzes concrete design changes and promotes a shift from narrow technological solutionism to a more reflective, ethics-by-design mindset.

📄 Full Content

We propose the idea of Ethics Readiness Levels (ERLs) as a structured tool for recurrently evaluating the integration of ethical thinking into research and software design processes. In the governance of emerging technologies, ethical guidance has often relied on so-called soft law instruments (codes of conduct, guidelines, or frameworks) designed to promote responsible behavior without imposing binding legal constraints. This is partly due to the difficulty of imposing harmonized regulations across the EU, especially in a global context characterized by strong reservations expressed by other international actors, e.g. the United States of America, with regard to any regulation of artificial intelligence (AI) that "unduly burdens AI innovation" (Kratsios, Sacks, and Rubio 2025). Another reason is related to the principle, upheld in several member states such as Germany, that protects scientific freedom under constitutional law. Nevertheless, the recent trajectory of technological regulation in the European Union shows that soft law can evolve into hard law: this has been the case, notably, with the adoption of the AI Act (European Commission 2022; Terpan 2015).

Rather than opposing soft and hard law, our concern lies in their practical adaptation: how can high-level societal values and principles, whether embedded in soft guidelines or hard regulation, be translated into operational steps for AI researchers and software developers? One of the most influential soft law frameworks in Europe is the approach of 'Responsible Research and Innovation' (RRI), promoted to foster "acceptability, sustainability, and societal desirability" in scientific and technological innovation (Von Schomberg 2013). RRI signaled a new phase of institutional awareness, acknowledging the importance of societal concerns and ethical reflection throughout the innovation chain.
It laid the groundwork for methodologies such as ethics-by-design and values-by-design (Brey and Dainow 2021; Van den Hoven, Vermaas, and Van de Poel 2015), which seek to embed ethical thinking directly into the development process. From a philosophical standpoint, the ethical approach championed in RRI is essentially deontological: it aims at codifying desirable conduct through normative principles, enabling ethical judgments based on compliance. This brings it close to the logic of legal and regulatory oversight. For example, the high-level principles and values listed in recital 27 of the EU AI Act are translated into a set of technical norms and standards requiring legal compliance.

Yet this deontological approach to AI regulation has long faced criticism. It tends to reduce responsibility to preordained norms, which may not capture the evolving and context-dependent nature of responsibility (Grinbaum and Groves 2013). Furthermore, these principles require a level of operationalization beyond usual legal parlance to become technically meaningful and actionable for AI practitioners. For example, the principle of “respect for human rights” cannot simply remain a statement in natural language. As Lessig famously put it, “code is law” (Lessig 2000): if a principle cannot be translated into design decisions or constraints on the behavior of AI systems, it lacks operational effect. What exactly “respect for human rights” means in an operational setting is highly context-dependent: which humans, which rights, and which criteria of respect are at stake cannot be decided via a preordained norm. The emergence of large language models (LLMs) adds a new twist to this well-known observation of context-dependence in technology ethics: AI systems seem to understand and respond to normative instructions expressed in natural language, giving the impression that principles such as “respect for human rights” could be operationalized simply by being fed to the AI system in a prompt. This illusion has led to the development of approaches like “constitutional AI” (Kundu et al. 2023), in which LLMs are fine-tuned with high-level ethical directives. Such approaches create a misleading sense of moral adequacy: the model does not understand the principle, nor does it reason morally; it only produces an illusion of understanding in the user, merely reproducing linguistic associations that correlate with certain user expectations, while lacking robustness. As a result, users may believe for some time that the system is behaving based on rules, until an unexpected output breaks that illusion and reveals the absence of genuine ethical grounding (Grinbaum 2019). Rather than resolving the problem of operationalization, LLMs risk masking it.

The AI ethics community has increasingly turned toward more structured and transparent forms of operationalizing high-level ethical principles. A central example is the ALTAI checklist (European Commission 2020), as well as the GPAI code of practice. Both documents translate abstract values into a set of design questions addressed to software developers. These checklists, sometimes tailored to a s
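To make the idea of a dynamic, tree-like questionnaire with per-level scoring more concrete, the following is a minimal illustrative sketch. All names, the yes/no answer model, the scoring rule, and the level thresholds are our own assumptions for illustration; the paper's actual ERL instrument uses context-specific indicators and is not specified in code here.

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    """One node of a tree-like questionnaire."""
    text: str
    # Follow-up questions asked only when this question is answered
    # "yes", so the questionnaire adapts to the use case.
    follow_ups_if_yes: list["Question"] = field(default_factory=list)

def score(questions: list[Question], answers: dict[str, bool]) -> float:
    """Fraction of *reached* questions answered affirmatively.

    Follow-ups are reached only when their parent is answered "yes",
    mimicking the dynamic, context-dependent traversal of the tree.
    """
    reached, addressed = 0, 0
    stack = list(questions)
    while stack:
        q = stack.pop()
        reached += 1
        if answers.get(q.text, False):
            addressed += 1
            stack.extend(q.follow_ups_if_yes)
    return addressed / reached if reached else 0.0

def ethics_readiness_level(s: float) -> int:
    """Map a score to one of four levels (illustrative thresholds)."""
    if s < 0.25:
        return 1
    if s < 0.50:
        return 2
    if s < 0.75:
        return 3
    return 4

# Hypothetical usage for a biometric use case:
tree = [
    Question(
        "Is biometric data processed?",
        follow_ups_if_yes=[Question("Is consent obtained and logged?")],
    )
]
answers = {
    "Is biometric data processed?": True,
    "Is consent obtained and logged?": False,
}
s = score(tree, answers)  # 1 of 2 reached questions addressed -> 0.5
level = ethics_readiness_level(s)
```

The design point the sketch captures is the one emphasized in the abstract: because follow-ups are only reached in context, two systems answering the same top-level question can be evaluated against different sets of indicators, while the numeric score still allows progress to be tracked over time.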

…(Full text truncated)…

Reference

This content is AI-processed based on ArXiv data.
