The Curious Case of Factual (Mis)Alignment between LLMs' Short- and Long-Form Answers

Reading time: 1 minute

📝 Original Info

  • Title: The Curious Case of Factual (Mis)Alignment between LLMs’ Short- and Long-Form Answers
  • ArXiv ID: 2510.11218
  • Date: 2025-10-13
  • Authors: Hyunwoo Kim (KAIST School of Computing), Sujin Lee (Seoul National University AI Research Institute), Minjae Park (OpenAI), Daeun Jung (Google DeepMind)

📝 Abstract

Not available.

💡 Deep Analysis

📄 Full Content

Reference

This content is AI-processed from open-access arXiv data.
