Training LLMs Beyond Next Token Prediction -- Filling the Mutual Information Gap

Reading time: 1 minute

📝 Original Info

  • Title: Training LLMs Beyond Next Token Prediction – Filling the Mutual Information Gap
  • ArXiv ID: 2511.00198
  • Date: 2025-10-31
  • Authors: Not provided; please check the original paper or the publishing venue.

📝 Abstract

Optimizing training performance in large language models (LLMs) remains an essential challenge, particularly improving model quality while controlling computational cost. This work challenges the conventional approach of training LLMs with next-token prediction (NTP), arguing that predicting information-rich tokens during training is a more effective way to train LLMs. We investigate the impact of the proposed solution on three kinds of LLM tasks: arithmetic, multi-label text classification, and natural-language generation. This work offers a principled approach to optimizing LLM training, advancing both model performance and the theoretical understanding of target-token selection strategies.
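To make the idea of "predicting information-rich tokens" concrete, the sketch below shows one way a token-selective training objective could look. This is not the paper's method (the excerpt does not describe it): the use of a frozen reference model, surprisal as a proxy for information content, and the temperature-based weighting are all illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's actual objective): a token-weighted
# cross-entropy loss that up-weights "information-rich" target tokens.
# Information content is approximated here by the surprisal -log p(token)
# under a frozen reference model; the reference model, the softmax weighting,
# and the temperature are hypothetical choices for illustration only.
import torch
import torch.nn.functional as F


def info_weighted_ntp_loss(logits, targets, ref_logits, temperature=1.0):
    """logits, ref_logits: (batch, seq_len, vocab); targets: (batch, seq_len)."""
    vocab = logits.size(-1)

    # Per-token cross-entropy of the model being trained (standard NTP loss).
    ce = F.cross_entropy(
        logits.reshape(-1, vocab), targets.reshape(-1), reduction="none"
    ).reshape(targets.shape)

    with torch.no_grad():
        # Surprisal of each target token under the frozen reference model,
        # used as a crude proxy for how information-rich the token is.
        ref_logp = F.log_softmax(ref_logits, dim=-1)
        surprisal = -ref_logp.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
        # Normalize surprisals into per-sequence weights; the temperature
        # controls how sharply high-information tokens dominate the loss.
        weights = torch.softmax(surprisal / temperature, dim=-1) * targets.size(-1)

    # Weighted average: information-rich tokens contribute more to the update.
    return (weights * ce).mean()
```

In this sketch, setting a very high temperature recovers (approximately) the uniform weighting of standard NTP, while a low temperature concentrates the loss on the few tokens the reference model finds most surprising.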

💡 Deep Analysis

📄 Full Content

Reference

This content is AI-processed based on open access ArXiv data.
