Activations as Features: Probing LLMs for Generalizable Essay Scoring Representations

Reading time: 1 minute

📝 Original Info

  • Title: Activations as Features: Probing LLMs for Generalizable Essay Scoring Representations
  • ArXiv ID: 2512.19456
  • Date: 2025-12-22
  • Authors: Not provided (no author information was given in the paper)

📝 Abstract

Automated essay scoring (AES) is a challenging task in cross-prompt settings due to the diversity of scoring criteria. While previous studies have focused on the output of large language models (LLMs) to improve scoring accuracy, we believe activations from intermediate layers may also provide valuable information. To explore this possibility, we evaluated the discriminative power of LLMs' activations in the cross-prompt essay scoring task. Specifically, we used the activations to fit probes and further analyzed how the choice of model and the LLM's input content affect this discriminative power. By computing the directions of essay representations along various trait dimensions under different prompts, we analyzed how the evaluation perspective of large language models varies with essay type and trait. Results show that the activations possess strong discriminative power in evaluating essay quality and that LLMs can adapt their evaluation perspectives to different traits and essay types, effectively handling the diversity of scoring criteria in cross-prompt settings.
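
The probing setup described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the model name, layer index, mean-pooling strategy, and the ridge-regression probe are all assumptions, and the essay lists are hypothetical placeholders for a cross-prompt split.

```python
# Sketch: use intermediate-layer activations of an LLM as features for an
# essay-scoring probe. All specific choices here are illustrative assumptions.
import numpy as np
import torch
from sklearn.linear_model import Ridge
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "gpt2"   # placeholder; substitute the LLM being probed
LAYER = 6             # assumption: an intermediate layer

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

def essay_activation(text: str) -> np.ndarray:
    """Mean-pool one intermediate layer's hidden states for a single essay."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        outputs = model(**inputs)
    hidden = outputs.hidden_states[LAYER]         # (1, seq_len, hidden_dim)
    return hidden.mean(dim=1).squeeze(0).numpy()  # (hidden_dim,)

# Toy cross-prompt placeholders: train on essays from some prompts,
# evaluate on essays from prompts unseen during training.
train_essays = ["An argumentative essay written for a training prompt ...",
                "Another training essay with weaker organization ..."]
train_scores = [4.0, 2.0]
test_essays  = ["An essay written for a different, unseen prompt ..."]

X_train = np.stack([essay_activation(e) for e in train_essays])
probe = Ridge(alpha=1.0).fit(X_train, train_scores)

X_test = np.stack([essay_activation(e) for e in test_essays])
predicted_scores = probe.predict(X_test)
```

Under a setup like this, the "direction" associated with a trait on a given prompt could be taken as the fitted probe's weight vector (or the difference between mean activations of high- and low-scored essays), and comparing such directions across prompts and traits, for example via cosine similarity, would indicate how the model's evaluation perspective shifts with essay type and trait.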

Reference

This content is AI-processed based on open access ArXiv data.
