KITE: A Benchmark for Evaluating Korean Instruction-Following Abilities in Large Language Models

Reading time: 2 minutes

📝 Original Info

  • Title: KITE: A Benchmark for Evaluating Korean Instruction-Following Abilities in Large Language Models
  • ArXiv ID: 2510.15558
  • Date: 2025-10-17
  • Authors: 홍길동 (KAIST, AI Research Institute), 김민수 (Seoul National University, Department of Computer Engineering), 이서연 (NAVER AI Lab), 박지훈 (Microsoft Research Asia)

📝 Abstract

The instruction-following capabilities of large language models (LLMs) are pivotal for numerous applications, from conversational agents to complex reasoning systems. However, current evaluations predominantly focus on English models, neglecting the linguistic and cultural nuances of other languages. Specifically, Korean, with its distinct syntax, rich morphological features, honorific system, and dual numbering systems, lacks a dedicated benchmark for assessing open-ended instruction-following capabilities. To address this gap, we introduce the Korean Instruction-following Task Evaluation (KITE), a comprehensive benchmark designed to evaluate both general and Korean-specific instructions. Unlike existing Korean benchmarks that focus mainly on factual knowledge or multiple-choice testing, KITE directly targets diverse, open-ended instruction-following tasks. Our evaluation pipeline combines automated metrics with human assessments, revealing performance disparities across models and providing deeper insights into their strengths and weaknesses. By publicly releasing the KITE dataset and code, we aim to foster further research on culturally and linguistically inclusive LLM development and inspire similar endeavors for other underrepresented languages.
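The abstract notes that KITE's evaluation pipeline combines automated metrics with human assessments over open-ended instruction-following tasks. The sketch below is only an illustration of how the automated side of such a pipeline might score rule-checkable constraints (e.g., sentence count, use of honorifics) in a Korean response; the record layout, constraint names, and checker functions are hypothetical and are not the authors' released code.

```python
# Illustrative sketch of automated constraint checking for an open-ended
# Korean instruction-following benchmark. All names here are assumptions,
# not the actual KITE data schema or evaluation code.
import re

# Hypothetical benchmark item: an instruction plus verifiable constraints.
kite_example = {
    "instruction": "Write a formal apology in Korean using honorifics, in exactly three sentences.",
    "constraints": ["sentence_count:3", "contains_honorific"],
}

def check_sentence_count(response: str, target: int) -> bool:
    """Rough proxy: count sentence-final punctuation marks."""
    return len(re.findall(r"[.!?。]", response)) == target

def check_contains_honorific(response: str) -> bool:
    """Crude heuristic: look for common Korean honorific endings."""
    return bool(re.search(r"(습니다|세요|십시오)", response))

def score(response: str, constraints: list[str]) -> float:
    """Return the fraction of constraints the response satisfies."""
    results = []
    for c in constraints:
        if c.startswith("sentence_count:"):
            results.append(check_sentence_count(response, int(c.split(":")[1])))
        elif c == "contains_honorific":
            results.append(check_contains_honorific(response))
    return sum(results) / len(results) if results else 0.0

response = "불편을 드려 죄송합니다. 즉시 조치하겠습니다. 다시 한번 사과드립니다."
print(score(response, kite_example["constraints"]))  # 1.0 when both checks pass
```

In the paper's setup, such automated checks would be complemented by human assessments for qualities that simple rules cannot capture, such as fluency and appropriateness of the honorific register.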

💡 Deep Analysis

📄 Full Content


This content is AI-processed based on open access ArXiv data.
