Exploring the features used for summary evaluation by Human and GPT
Reading time: 1 minute
...
📝 Original Info
- Title: Exploring the features used for summary evaluation by Human and GPT
- ArXiv ID: 2512.19620
- Date: 2025-12-22
- Authors: Not provided (the paper does not list its authors).
📝 Abstract
Summary assessment involves evaluating how well a generated summary reflects the key ideas and meaning of the source text, which requires a deep understanding of the content. Large Language Models (LLMs) have been used to automate this process, acting as judges that evaluate summaries with respect to the original text. While previous research has investigated the alignment between LLM and human responses, it is not yet well understood which properties or features they exploit when asked to evaluate along a particular quality dimension, and little attention has been paid to mapping evaluation scores onto concrete metrics. In this paper, we address this issue and identify features aligned with human and Generative Pre-trained Transformer (GPT) responses by studying statistical and machine learning metrics. Furthermore, we show that instructing GPTs to employ the metrics used by humans can improve their judgments and align them more closely with human responses.
💡 Deep Analysis
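The core analysis described in the abstract — checking which surface features of a summary track human or GPT quality scores — can be illustrated with a simple correlation probe. The sketch below is not the paper's code; the feature set, the use of Spearman correlation, and all function names are assumptions made for illustration only.

```python
# Illustrative sketch: correlate hypothetical summary features with quality scores.
# Feature definitions and the Spearman setup are assumptions, not the paper's method.
from scipy.stats import spearmanr


def candidate_features(source: str, summary: str) -> dict:
    """Compute simple, hypothetical surface features of a (source, summary) pair."""
    src_tokens, sum_tokens = source.split(), summary.split()
    overlap = len(set(src_tokens) & set(sum_tokens))
    return {
        "summary_length": len(sum_tokens),
        "compression_ratio": len(sum_tokens) / max(len(src_tokens), 1),
        "token_overlap": overlap / max(len(set(sum_tokens)), 1),
    }


def feature_score_alignment(pairs, scores):
    """Spearman correlation between each feature and a list of quality scores.

    pairs  : list of (source, summary) tuples
    scores : human or GPT quality scores for the same pairs
    """
    feats = [candidate_features(src, summ) for src, summ in pairs]
    alignment = {}
    for name in feats[0]:
        values = [f[name] for f in feats]
        rho, p_value = spearmanr(values, scores)
        alignment[name] = (rho, p_value)
    return alignment
```

Run twice, once against human scores and once against GPT scores, such a probe yields two correlation profiles that can be compared feature by feature; the paper's further step of instructing GPTs to employ human-used metrics would then correspond to naming the strongly human-correlated features in the judging prompt.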
📄 Full Content
Reference
This content was AI-processed from open-access ArXiv data.