Auto-BenchmarkCard: Automated Synthesis of Benchmark Documentation

Reading time: 4 minutes
...

📝 Original Info

  • Title: Auto-BenchmarkCard: Automated Synthesis of Benchmark Documentation
  • ArXiv ID: 2512.09577
  • Date: 2025-12-10
  • Authors: Aris Hofmann, Inge Vejsbjerg, Dhaval Salwala, Elizabeth M. Daly

📝 Abstract

We present Auto-BenchmarkCard, a workflow for generating validated descriptions of AI benchmarks. Benchmark documentation is often incomplete or inconsistent, making it difficult to interpret and compare benchmarks across tasks or domains. Auto-BenchmarkCard addresses this gap by combining multi-agent data extraction from heterogeneous sources (e.g., Hugging Face, Unitxt, academic papers) with LLM-driven synthesis. A validation phase evaluates factual accuracy through atomic entailment scoring using the FactReasoner tool. This workflow has the potential to promote transparency, comparability, and reusability in AI benchmark reporting, enabling researchers and practitioners to better navigate and evaluate benchmark choices.

💡 Deep Analysis

Figure 1: Overview of the Auto-BenchmarkCard workflow (Extraction, Composition, and Validation phases).

📄 Full Content

Benchmarks are vital in AI for standardizing tasks, enabling model evaluation, tracking progress, and setting baseline expectations (Reuel et al. 2024). Appropriate benchmarks help systematically detect, assess, and mitigate risks (Sokol et al. 2024); unsuitable ones risk leaving failure modes undetected, leading to deployment with unverified, poorly understood behaviors. Choosing appropriate benchmarks ensures models are evaluated on relevant tasks, avoiding inaccurate assessments and missed risks.

Benchmark documentation is often limited, requiring developers to parse source code or consult reference papers to understand the details of a benchmark. Recently, Sokol et al. (2024) proposed a standardized representation of benchmark metadata, drawing inspiration from existing standards such as model cards (Mitchell et al. 2019) and dataset documentation frameworks like Croissant (Akhtar et al. 2024). Sokol’s framework defines key benchmark metadata such as “purpose,” “methodology,” and “risks” in order to support informed selection, clearer stakeholder communication, and better understanding of objectives and limitations. However, creating benchmark cards by hand is time- and labor-intensive, posing a significant barrier to widespread adoption across the community.

A large-scale analysis of Model Cards shows frequent completion of “Model Description” and “Training Procedure,” but critical fields such as evaluation, limitations, and risks are often left blank (Huggingface 2025). Regarding risks, Rao et al. (2025) found that only 14% of AI model cards mention risks, and 96% of those were identical. This imbalance highlights the practical challenges of achieving comprehensive documentation through manual effort alone.

To mitigate this, we introduce an automated workflow aimed at generating BenchmarkCards. The system employs a multi-agent architecture to extract relevant information from heterogeneous sources, including Unitxt, Hugging Face repositories, and associated academic publications. Content is structured into a BenchmarkCard based on Sokol’s schema (Sokol et al. 2024) and validated for factual consistency against source data.

The workflow has three phases: Extraction, Composition, and Validation, illustrated in Figure 1. Access is provided via a Python CLI, and the system is available as open source.
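To make the data flow concrete, the sketch below is a purely hypothetical Python driver mirroring the three phases. None of these function names come from the released system; the phase bodies are stubs standing in for the tools described next.

```python
# Hypothetical end-to-end driver for the three phases described above.
# All names are illustrative placeholders, not the project's actual CLI or API.

def run_extraction(benchmark_id: str) -> dict:
    # Stand-in for the Unitxt, Extractor, Hugging Face, and Docling tools.
    return {"unitxt_card": {}, "hf_metadata": {}, "paper_markdown": ""}


def compose_card(sources: dict) -> dict:
    # Stand-in for LLM-driven synthesis of BenchmarkCard sections.
    return {"purpose": "", "methodology": "", "limitations": ""}


def validate_card(card: dict, sources: dict) -> dict:
    # Stand-in for atomization and entailment scoring against extracted sources.
    return {section: None for section in card}


def build_benchmark_card(benchmark_id: str) -> dict:
    sources = run_extraction(benchmark_id)   # Extraction phase
    card = compose_card(sources)             # Composition phase
    report = validate_card(card, sources)    # Validation phase
    return {"card": card, "validation": report}


if __name__ == "__main__":
    # "cards.sst2" is used here only as an example-style Unitxt identifier.
    print(build_benchmark_card("cards.sst2"))
```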

Extraction Phase: This phase collects structured benchmark data from multiple sources using modular custom agent tools. The implementation currently supports Unitxt but can be adapted for other standards like lm-eval-harness. Users begin by specifying a benchmark identifier for the Unitxt Tool, built upon the Unitxt library (Bandel et al. 2024), which searches its catalog and retrieves the corresponding UnitxtCard. The retrieved card is then parsed to identify cited materials (e.g., metrics, templates) and retrieve related supplementary cards, with the result returned in JSON format. Next, the Extractor Tool extracts identifiers from the JSON such as the Hugging Face repository ID and the publication URL for subsequent processing. The Hugging Face Tool then extracts metadata from the benchmark’s repository. Finally, the Docling Tool (Livathinos et al. 2025) processes the benchmark’s associated research publication, converting it into machine-readable markdown format.
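As a rough illustration of the Hugging Face and Docling steps, the sketch below pulls dataset metadata with the huggingface_hub client and converts a paper to markdown with Docling. The repository ID and paper URL are placeholders chosen for this example, the Unitxt Tool step is omitted, and this is not the authors' implementation.

```python
# Illustrative extraction helpers (not the authors' implementation).
# Requires: pip install huggingface_hub docling
from huggingface_hub import HfApi
from docling.document_converter import DocumentConverter


def extract_hf_metadata(repo_id: str) -> dict:
    """Pull basic metadata and dataset-card fields from a Hugging Face dataset repo."""
    info = HfApi().dataset_info(repo_id)
    return {
        "id": info.id,
        "tags": info.tags,
        "card_data": info.card_data.to_dict() if info.card_data else {},
    }


def paper_to_markdown(paper_url: str) -> str:
    """Convert the benchmark's publication into machine-readable markdown via Docling."""
    result = DocumentConverter().convert(paper_url)
    return result.document.export_to_markdown()


if __name__ == "__main__":
    # Placeholder identifiers: in the real workflow these come from the parsed
    # UnitxtCard via the Extractor Tool, not from hard-coded values.
    hf_meta = extract_hf_metadata("nyu-mll/glue")
    paper_md = paper_to_markdown("https://arxiv.org/pdf/1804.07461")
    print(sorted(hf_meta["card_data"]), len(paper_md))
```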

Composition Phase: The extracted data is passed to a large language model (LLM), which generates a complete BenchmarkCard by filling predefined sections such as purpose, methodology, and limitations. Once the initial card is generated, the system passes it to the Risk Atlas Nexus framework (Bagehorn et al. 2025), whose risk identifier component flags potential risks based on a structured risk taxonomy.

Validation Phase: Validation plays a critical role by introducing a structured approach to verifying the factual accuracy of the initial BenchmarkCard. To address consistency challenges, especially conflicting details, we use FactReasoner, a probabilistic framework for assessing factual consistency via natural language inference (Marinescu et al. 2025). We extend the package with custom components for atomization and context retrieval, specifically adapted to the BenchmarkCard format. The validation process begins by breaking down the BenchmarkCard into atomic statements: small, self-contained units of meaning that can each be checked for accuracy. This step is performed using an LLM with a prompt tailored to the structure and content of BenchmarkCards. In contrast to generic atomization, this approach ensures that the resulting statements are not only minimal but also explicitly designed to be fact-checkable.
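FactReasoner's API is not shown in this excerpt, so the following sketch substitutes an off-the-shelf NLI model from Hugging Face to illustrate what atomic entailment scoring looks like; the atomize() helper is a crude sentence splitter standing in for the paper's LLM-based, BenchmarkCard-aware atomization.

```python
# Illustrative atomic-entailment check (a stand-in, not FactReasoner itself).
# Requires: pip install transformers torch
from transformers import pipeline

# A common off-the-shelf NLI model; FactReasoner uses its own probabilistic framework.
nli = pipeline("text-classification", model="roberta-large-mnli")


def atomize(section_text: str) -> list[str]:
    """Crude sentence-level atomization; the paper uses a tailored LLM prompt instead."""
    return [s.strip() for s in section_text.split(".") if s.strip()]


def score_statement(premise: str, statement: str) -> dict:
    """Score whether the extracted source text (premise) entails the atomic statement."""
    result = nli({"text": premise, "text_pair": statement})
    if isinstance(result, list):  # some pipeline versions wrap single inputs in a list
        result = result[0]
    return {"statement": statement, "label": result["label"], "score": result["score"]}


if __name__ == "__main__":
    source = "GLUE is a benchmark of nine English sentence-understanding tasks."
    card_section = "The benchmark covers nine tasks. It evaluates French parsing."
    for atom in atomize(card_section):
        print(score_statement(source, atom))
```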

To assess each statement’s validity, we reference content from the Extraction Phase. The extracted metadata can be extensive, so comparing each atomic statement against the full knowledge base is inefficient.
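The excerpt is cut off here, but one plausible way to narrow the evidence per statement (an illustrative assumption, not necessarily the paper's custom context-retrieval component) is simple lexical retrieval: rank extracted passages by similarity to each atomic statement and pass only the top few to the entailment scorer.

```python
# Hypothetical context-retrieval step: select the most relevant extracted passages
# for each atomic statement before entailment scoring. Not the paper's implementation.
# Requires: pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def top_k_context(statement: str, passages: list[str], k: int = 3) -> list[str]:
    """Return the k passages most similar to the statement under TF-IDF cosine similarity."""
    vectorizer = TfidfVectorizer().fit(passages + [statement])
    passage_vecs = vectorizer.transform(passages)
    statement_vec = vectorizer.transform([statement])
    scores = cosine_similarity(statement_vec, passage_vecs)[0]
    ranked = sorted(zip(scores, passages), key=lambda pair: pair[0], reverse=True)
    return [passage for _, passage in ranked[:k]]


if __name__ == "__main__":
    knowledge_base = [
        "GLUE contains nine English sentence understanding tasks.",
        "The dataset is distributed under a permissive license.",
        "Evaluation uses accuracy and Matthews correlation.",
    ]
    print(top_k_context("The benchmark covers nine tasks.", knowledge_base, k=2))
```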


Reference

This content is AI-processed based on open access ArXiv data.
