MARIA: A Framework for Marginal Risk Assessment without Ground Truth in AI Systems


📝 Original Info

  • Title: MARIA: A Framework for Marginal Risk Assessment without Ground Truth in AI Systems
  • ArXiv ID: 2510.27163
  • Date: 2025-10-31
  • Authors: Not provided (the paper does not list author information)

📝 Abstract

Before deploying an AI system to replace an existing process, it must be compared with the incumbent to ensure improvement without added risk. Traditional evaluation relies on ground truth for both systems, but ground truth is often unavailable due to delayed or unknowable outcomes, high costs, or incomplete data, especially for long-standing systems deemed safe by convention. A more practical approach is to compute not the absolute risk of each system but the difference in risk between them. We therefore propose a marginal risk assessment framework that avoids dependence on ground truth or absolute risk. It emphasizes three kinds of relative evaluation: predictability, capability, and interaction dominance. By shifting focus from absolute to relative evaluation, our approach equips software teams with actionable guidance: identifying where AI enhances outcomes, where it introduces new risks, and how to adopt such systems responsibly.
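The abstract does not detail how MARIA's three methodologies work, so the sketch below only illustrates the general idea of relative evaluation: compute nothing in absolute terms, and instead record, per input, which system's output a pairwise judge prefers. All names here (`marginal_assessment`, `MarginalResult`, the toy `judge`) are assumptions made for this example, not the paper's method or API.

```python
# A minimal, hypothetical sketch of relative evaluation without ground truth.
# Nothing here is taken from the paper; the names and the judge are invented
# for illustration only.
from dataclasses import dataclass

@dataclass
class MarginalResult:
    wins: int    # inputs where the candidate beats the incumbent
    losses: int  # inputs where the candidate is worse (the marginal risk)
    ties: int    # inputs where the judge sees no meaningful difference

def marginal_assessment(inputs, incumbent, candidate, prefer):
    """Compare two systems pairwise on shared inputs.

    `prefer(x, a, b)` returns +1 if output `a` is better for input `x`,
    -1 if `b` is better, and 0 if indistinguishable. No absolute risk is
    ever computed; only the difference between the two systems.
    """
    wins = losses = ties = 0
    for x in inputs:
        verdict = prefer(x, candidate(x), incumbent(x))
        if verdict > 0:
            wins += 1
        elif verdict < 0:
            losses += 1
        else:
            ties += 1
    return MarginalResult(wins, losses, ties)

if __name__ == "__main__":
    # Toy systems estimating string length; the incumbent is off by one.
    incumbent = lambda s: len(s) + 1
    candidate = lambda s: len(s)

    def judge(x, a, b):
        # A relative preference only: which estimate is closer? In practice
        # this would be a human or model judge expressing a pairwise
        # preference, with no access to absolute labels.
        da, db = abs(a - len(x)), abs(b - len(x))
        return (db > da) - (da > db)

    print(marginal_assessment(["abc", "hello", "hi"], incumbent, candidate, judge))
    # -> MarginalResult(wins=3, losses=0, ties=0): the candidate dominates.
```

Under this framing, the `losses` count is exactly the candidate's marginal risk: the inputs where replacing the incumbent would make outcomes worse.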


Reference

This content is AI-processed based on open access ArXiv data.
