AACR-Bench: Evaluating Automatic Code Review with Holistic Repository-Level Context
High-quality evaluation benchmarks are pivotal for deploying Large Language Models (LLMs) in Automated Code Review (ACR). However, existing benchmarks suffer from two critical limitations: first, the lack of multi-language support in repository-level contexts, which restricts the generalizability of evaluation results; second, the reliance on noisy, incomplete ground truth derived from raw Pull Request (PR) comments, which constrains the scope of issue detection. To address these challenges, we introduce AACR-Bench, a comprehensive benchmark that provides full cross-file context across multiple programming languages. Unlike traditional datasets, AACR-Bench employs an “AI-assisted, Expert-verified” annotation pipeline to uncover latent defects often overlooked in original PRs, resulting in a 285% increase in defect coverage. Extensive evaluations of mainstream LLMs on AACR-Bench reveal that previous assessments may have either misjudged or only partially captured model capabilities due to data limitations. Our work establishes a more rigorous standard for ACR evaluation and offers new insights into LLM-based ACR: the granularity/level of context and the choice of retrieval method significantly impact ACR performance, and this influence varies depending on the LLM, the programming language, and the LLM usage paradigm (e.g., whether an Agent architecture is employed). The code, data, and other artifacts of our evaluation set are available at https://github.com/alibaba/aacr-bench .
💡 Research Summary
The paper introduces AACR‑Bench, a novel benchmark designed to evaluate large language model (LLM)‑driven Automated Code Review (ACR) with comprehensive repository‑level context across multiple programming languages. Existing ACR benchmarks suffer from two major shortcomings: (1) they rely on raw pull‑request (PR) comments as ground truth, which leads to noisy and incomplete issue annotations, and (2) they are typically limited to a single language and file‑level context, preventing realistic assessment of cross‑file defects.
To overcome these limitations, the authors construct a dataset comprising 200 PRs drawn from 50 popular GitHub repositories, covering ten mainstream languages (JavaScript, Python, TypeScript, Java, C#, C++, C, PHP, Go, Rust). Each PR contains ≤1,000 changed lines, has English titles/descriptions, and includes at least two inline comments with at least one adopted comment that resulted in code changes. The PRs are stratified by repository, problem domain, and change size to ensure diversity.
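The selection criteria above can be sketched as a simple filter. This is an illustrative reconstruction, not the benchmark's actual code; the field names (`changed_lines`, `inline_comments`, `adopted`) are assumptions, and the English check is a crude ASCII heuristic standing in for whatever language detection the authors used.

```python
from dataclasses import dataclass, field

@dataclass
class InlineComment:
    body: str
    adopted: bool  # True if the comment led to a code change

@dataclass
class PullRequest:
    title: str
    description: str
    changed_lines: int
    inline_comments: list[InlineComment] = field(default_factory=list)

def is_english(text: str) -> bool:
    # Crude stand-in heuristic: treat all-ASCII text as English.
    return text.isascii()

def passes_filter(pr: PullRequest) -> bool:
    """Apply the stated selection rules: at most 1,000 changed lines,
    English title/description, at least two inline comments, and at
    least one adopted comment that resulted in a code change."""
    return (
        pr.changed_lines <= 1000
        and is_english(pr.title)
        and is_english(pr.description)
        and len(pr.inline_comments) >= 2
        and any(c.adopted for c in pr.inline_comments)
    )
```

Stratification by repository, domain, and change size would then be applied to the PRs that pass this filter.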
The benchmark’s core innovation is an “AI‑assisted, expert‑verified” annotation pipeline. Six state‑of‑the‑art LLMs (Claude‑4.5‑Sonnet, Qwen‑480B‑Coder, GPT‑5.2, DeepSeek‑V3.2, GLM‑4.7, Gemini‑3‑Pro) are used to augment the original 391 human PR comments, generating additional review comments. These AI‑generated comments are then merged with the original set, de‑duplicated, and subjected to rigorous human validation by 80 senior software engineers (≥2 years of industry experience). Each comment is reviewed independently by at least two annotators; disagreements are resolved by a six‑member core team. The final ground truth consists of 1,505 fine‑grained review comments, representing a 285% increase in defect coverage compared to prior datasets.
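The merge-and-deduplicate step can be illustrated with a minimal sketch. The paper does not specify its de-duplication criterion; the token-overlap (Jaccard) similarity and the 0.6 threshold below are assumptions chosen for illustration.

```python
def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two comment strings."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def merge_comments(human: list[str], generated: list[str],
                   threshold: float = 0.6) -> list[str]:
    """Keep all human comments, then add each AI-generated comment
    only if it is not a near-duplicate of anything already kept."""
    kept = list(human)
    for cand in generated:
        if all(jaccard(cand, existing) < threshold for existing in kept):
            kept.append(cand)
    return kept
```

In the actual pipeline, the surviving comments would then go to the two-annotator human validation stage rather than straight into the ground truth.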
Importantly, each comment is annotated with the required context level: Diff‑only (can be judged from the changed hunk), File‑level (needs the whole file), or Repo‑level (needs other files, PR metadata, or broader repository semantics). This enables granular analysis of how much context a model needs to detect a given issue.
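A ground-truth entry with its context-level label might be modeled as follows; the enum values mirror the three levels described above, while the record fields (`file_path`, `line`) are hypothetical schema choices.

```python
from dataclasses import dataclass
from enum import Enum

class ContextLevel(Enum):
    DIFF_ONLY = "diff"    # judgeable from the changed hunk alone
    FILE_LEVEL = "file"   # requires the whole containing file
    REPO_LEVEL = "repo"   # requires other files, PR metadata, or repo semantics

@dataclass
class ReviewComment:
    body: str
    file_path: str
    line: int
    context_level: ContextLevel

def context_breakdown(comments: list[ReviewComment]) -> dict:
    """Count ground-truth comments per required context level,
    enabling the granular per-level analysis described above."""
    counts = {level: 0 for level in ContextLevel}
    for c in comments:
        counts[c.context_level] += 1
    return counts
```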
The authors evaluate five open‑source LLMs (Qwen‑480B‑Coder, DeepSeek‑V3.2, GLM‑4.7, plus two others) and two commercial models (GPT‑5.2, Claude‑4.5‑Sonnet) on AACR‑Bench. Four retrieval strategies are compared: (a) No context (baseline), (b) BM25 keyword similarity, (c) Embedding‑based retrieval using Qwen‑Embedding‑8B, and (d) an Agent‑based approach (Claude Code) that can actively query the repository. For similarity‑based methods, three code snippets are retrieved uniformly; the Agent method retrieves context dynamically.
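The BM25 strategy (b) can be sketched with the standard Okapi BM25 scoring formula, ranking repository snippets against the diff text and keeping the top three, as in the paper's setting. This is a minimal self-contained implementation for illustration, not the benchmark's actual retriever; whitespace tokenization and the k1/b defaults are simplifying assumptions.

```python
import math
from collections import Counter

def bm25_rank(query: str, snippets: list[str],
              k1: float = 1.5, b: float = 0.75, top_k: int = 3) -> list[str]:
    """Score candidate snippets against the query with Okapi BM25 and
    return the top_k matches (three, per the paper's setting)."""
    docs = [s.lower().split() for s in snippets]
    avgdl = sum(len(d) for d in docs) / len(docs)
    n = len(docs)
    q_terms = query.lower().split()
    # Document frequency for each query term.
    df = {t: sum(1 for d in docs if t in d) for t in q_terms}
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in q_terms:
            if tf[t] == 0:
                continue
            idf = math.log((n - df[t] + 0.5) / (df[t] + 0.5) + 1)
            s += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    ranked = sorted(zip(scores, snippets), key=lambda p: -p[0])
    return [s for _, s in ranked[:top_k]]
```

The embedding-based variant replaces the BM25 score with cosine similarity over Qwen-Embedding-8B vectors, while the Agent-based approach issues repository queries interactively instead of ranking a fixed candidate pool.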
Performance is measured per PR using Precision, Recall, and F1, both overall and broken down by context level. Key findings include:
- Modern LLMs achieve higher absolute scores on AACR‑Bench than on prior benchmarks, confirming that richer ground truth reveals more of their capabilities.
- Retrieval strategy has a substantial impact: the Agent‑based method yields the largest gains for repo‑level comments (up to +22% F1), while BM25 performs adequately for diff‑only issues.
- Language‑specific trends emerge: statically typed languages (TypeScript, Java) benefit more from file‑level context, whereas dynamically typed languages (JavaScript, Python) see pronounced improvements when full repository context is supplied.
- Model size and openness are less predictive of performance than the alignment between the model’s reasoning style and the retrieval method; smaller models with effective retrieval can rival larger commercial models on certain tasks.
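The per-PR Precision/Recall/F1 scoring described above can be sketched as follows. The matching criterion (when a model comment counts as hitting a ground-truth comment) is not reproduced from the paper, so it is left as a caller-supplied predicate.

```python
def pr_scores(predicted: list[str], ground_truth: list[str],
              matches) -> tuple[float, float, float]:
    """Per-PR precision, recall, and F1.

    `matches(pred, gt) -> bool` is a hypothetical predicate deciding
    whether a predicted comment covers a ground-truth comment.
    """
    # A ground-truth comment is recalled if any prediction matches it.
    tp = sum(1 for gt in ground_truth
             if any(matches(p, gt) for p in predicted))
    # A prediction is a false positive if it matches no ground truth.
    fp = sum(1 for p in predicted
             if not any(matches(p, gt) for gt in ground_truth))
    precision = (len(predicted) - fp) / len(predicted) if predicted else 0.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

Overall scores would aggregate these per-PR values, and the context-level breakdown would restrict `ground_truth` to comments at a given level.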
The paper positions AACR‑Bench as the first multilingual, repository‑aware ACR benchmark, filling a gap between file‑level datasets (e.g., CodeReviewer, ContextCRBench) and single‑language repository datasets (e.g., SWR‑Bench, CodeFuse‑CR‑Bench). By providing both extensive defect coverage and explicit context‑level annotations, AACR‑Bench enables researchers to diagnose precisely where a model’s reasoning fails—whether due to insufficient context, language‑specific nuances, or retrieval shortcomings.
Limitations acknowledged include the modest scale (200 PRs) and focus on open‑source projects; future work will expand the benchmark to larger corpora, include more domain‑specific repositories (e.g., mobile, embedded), and explore automated methods for context‑level labeling.
In summary, AACR‑Bench establishes a more realistic and rigorous evaluation framework for LLM‑based automated code review, demonstrating that both the granularity of context and the choice of retrieval mechanism critically affect performance, and that these effects vary across languages and model architectures. This work provides a valuable resource for the community and a clear roadmap for building more capable, context‑aware ACR systems.