Evaluating Large Language Models for Abstract Evaluation Tasks: An Empirical Study

Notice: This research summary and analysis were automatically generated using AI technology. For full accuracy, please refer to the original arXiv source.

Introduction: Large language models (LLMs) can process requests and generate text, but their suitability for assessing complex academic content needs further investigation. To explore the potential of LLMs to assist scientific review, this study examined the consistency and reliability of ChatGPT-5, Gemini-3-Pro, and Claude-Sonnet-4.5 in evaluating abstracts, compared both with one another and with human reviewers.

Methods: 160 abstracts from a local conference were graded by human reviewers and the three LLMs using a common rubric. Composite score distributions across the three LLMs and fourteen reviewers were examined. Inter-rater reliability was calculated using intraclass correlation coefficients (ICCs) for within-AI reliability and AI-human concordance, and Bland-Altman plots were examined for visual agreement patterns and systematic bias.

Results: The LLMs achieved good-to-excellent agreement with one another (ICCs: 0.59-0.87). ChatGPT and Claude reached moderate agreement with human reviewers on overall quality and content-specific criteria, with ICCs of roughly 0.45-0.60 for the composite score, impression, clarity, objective, and results. They exhibited only fair agreement on subjective dimensions, with ICCs ranging from 0.23 to 0.38 for impact, engagement, and applicability. Gemini showed fair agreement on half of the criteria and no reliability on impact and applicability. All three LLMs showed small or negligible mean differences from the human mean composite score (ChatGPT = 0.24, Gemini = 0.42, Claude = -0.02).

Discussion: LLMs could process abstracts in batches with moderate agreement with human experts on overall quality and objective criteria. With an appropriate process architecture, they can apply a rubric consistently across volumes of abstracts far exceeding what is feasible for a single human rater. Their weaker performance on subjective dimensions indicates that AI should serve a complementary role in evaluation, while human expertise remains essential.


💡 Research Summary

This paper investigates whether large language models (LLMs) can reliably evaluate scientific abstracts, a task traditionally performed by human peer reviewers who face growing workload pressures and inconsistent scoring. The authors selected three state‑of‑the‑art LLMs—ChatGPT‑5, Gemini‑3‑Pro, and Claude‑Sonnet‑4.5—and compared their performance against fourteen expert human reviewers using a common seven‑item rubric (impression, clarity, objective, results, impact, engagement, applicability) applied to 160 conference abstracts from a 2025 research retreat.

Human reviewers each graded a subset of abstracts, with two reviewers per abstract, while the LLMs evaluated all 160 abstracts in batches of ten via API calls. Prompt engineering ensured that each model received the same instructions and rubric, promoting consistency. The authors measured inter‑rater reliability using intraclass correlation coefficients (ICCs) and visualized agreement with Bland‑Altman plots.
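As a rough illustration of this batching setup, the sketch below scores abstracts in groups of ten against a shared rubric prompt. The rubric wording, the model identifier ("gpt-4o" as a stand-in), and the expectation of a JSON reply are illustrative assumptions; the paper does not publish its exact prompts or API parameters.

```python
# Minimal sketch of batch scoring abstracts against a fixed rubric.
# The rubric text, model identifier, and JSON response handling are
# illustrative assumptions, not the authors' actual pipeline.
import json
from openai import OpenAI

RUBRIC = (
    "Score each abstract from 1 to 5 on: impression, clarity, objective, "
    "results, impact, engagement, applicability. "
    "Return a JSON list with one object of scores per abstract."
)

client = OpenAI()  # assumes an API key is configured in the environment


def score_batch(abstracts: list[str], model: str = "gpt-4o") -> list[dict]:
    """Send one batch of abstracts with the shared rubric and parse the scores."""
    numbered = "\n\n".join(f"[{i + 1}] {text}" for i, text in enumerate(abstracts))
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": numbered},
        ],
        temperature=0,  # minimize run-to-run variation in the scores
    )
    return json.loads(response.choices[0].message.content)


def score_all(abstracts: list[str], batch_size: int = 10) -> list[dict]:
    """Process every abstract in batches of ten, as in the study design."""
    scores: list[dict] = []
    for start in range(0, len(abstracts), batch_size):
        scores.extend(score_batch(abstracts[start:start + batch_size]))
    return scores
```

Temperature is pinned at 0 here on the assumption that repeatable scores were desired; the same rubric string would be reused verbatim across all three model APIs.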

Key findings:

  1. LLM‑to‑LLM agreement was strong. Using a two‑way random‑effects model with absolute agreement (a minimal computation sketch follows this list), composite scores achieved ICC = 0.80, and the “results” criterion reached ICC = 0.87. Five other criteria (impression, clarity, impact, engagement, applicability) fell in the “good” range (0.65–0.79). Only the “objective” item showed moderate agreement (ICC = 0.59). This indicates that the three models apply the rubric consistently relative to one another.

  2. LLM‑to‑human agreement was moderate at best. With a one‑way random‑effects model, ChatGPT and Claude produced composite ICCs of 0.50 and 0.55, respectively, and moderate scores on impression, clarity, objective, and results (0.45–0.62). However, for the three more subjective dimensions—impact, engagement, applicability—ICCs dropped to 0.23–0.38, indicating only fair concordance. Gemini performed poorest, achieving moderate agreement only on clarity (0.47) and results (0.57) and essentially zero agreement on impact and applicability.

  3. Mean score differences were small across all models: ChatGPT was on average 0.24 points higher than the human mean, Gemini 0.42 points higher, and Claude virtually identical (‑0.02). Bland‑Altman plots showed that most differences fell within the 95% limits of agreement, and the bias tended to diminish for higher‑scoring abstracts, especially for ChatGPT.
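The reliability figures above correspond to standard ICC forms. The sketch below shows how such numbers can be reproduced from a long-format score table with the `pingouin` package, together with the Bland-Altman bias and limits of agreement; the column names and data layout are assumptions for illustration.

```python
# Sketch of the reliability analysis: ICC forms from long-format scores and
# Bland-Altman limits of agreement for LLM vs. human composite scores.
# Column names ("abstract", "rater", "score") are illustrative assumptions.
import numpy as np
import pandas as pd
import pingouin as pg


def icc_table(long_scores: pd.DataFrame) -> pd.DataFrame:
    """Return all ICC forms for a table with one row per (abstract, rater) pair.

    ICC2k corresponds to the two-way random-effects, absolute-agreement,
    average-measures model used for LLM-to-LLM agreement; ICC1 corresponds to
    the one-way random-effects model used for AI-human concordance (for the
    one-way form, raters can be coded positionally, e.g. "reviewer_1" and
    "reviewer_2", since rater identity does not enter the model).
    """
    return pg.intraclass_corr(
        data=long_scores, targets="abstract", raters="rater", ratings="score"
    )


def bland_altman(llm: np.ndarray, human: np.ndarray) -> dict[str, float]:
    """Mean bias and 95% limits of agreement between two sets of composite scores."""
    diff = llm - human
    bias = float(diff.mean())
    half_width = 1.96 * float(diff.std(ddof=1))
    return {"bias": bias, "lower": bias - half_width, "upper": bias + half_width}
```

Filtering the returned table for Type == "ICC2k" would give the average-measures, absolute-agreement coefficient reported for the composite score.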

Interpretation: LLMs can efficiently process large batches of abstracts and produce reliable scores on objective, structural criteria. This suggests a viable role for AI in the early‑screening or triage phases of conference review, where consistency and speed are paramount. The weaker performance on impact, engagement, and applicability reflects the models’ limited ability to judge contextual relevance, novelty, or societal significance—domains that still require human expertise.

The study also highlights the importance of prompt design and batch processing for achieving stable LLM outputs. Future work should explore domain‑specific fine‑tuning, ensemble approaches, and larger, more diverse abstract corpora to improve performance on subjective dimensions.

In conclusion, the authors propose a hybrid workflow: employ LLMs for rapid, objective scoring of abstracts, then have human experts review and adjudicate the more nuanced, subjective aspects. Such a system could alleviate reviewer fatigue, improve overall consistency, and maintain the essential human judgment needed for high‑stakes academic evaluation.
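Purely as an illustrative sketch of such a hybrid workflow (the cutoffs and routing rule below are assumptions, not part of the paper), LLM composite scores could handle a first pass while borderline or contested abstracts are routed to human reviewers:

```python
# Hypothetical triage rule for a hybrid LLM/human review workflow.
# The cutoffs and the disagreement criterion are illustrative assumptions.
from statistics import mean, stdev


def route_abstract(llm_composites: list[float],
                   accept_cutoff: float = 4.0,
                   reject_cutoff: float = 2.5,
                   max_spread: float = 0.75) -> str:
    """Send an abstract to human review unless the LLMs agree and the score is clear-cut."""
    avg = mean(llm_composites)
    spread = stdev(llm_composites) if len(llm_composites) > 1 else 0.0
    if spread > max_spread or reject_cutoff <= avg <= accept_cutoff:
        return "human_review"  # borderline score or LLM disagreement: needs an expert
    return "provisional_accept" if avg > accept_cutoff else "provisional_reject"
```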

