Correlation Dimension of Auto-Regressive Large Language Models

Reading time: 2 minutes
...

📝 Original Info

  • Title: Correlation Dimension of Auto-Regressive Large Language Models
  • ArXiv ID: 2510.21258
  • Date: 2025-10-24
  • Authors: Not listed in the provided metadata.

📝 Abstract

Large language models (LLMs) have achieved remarkable progress in natural language generation, yet they continue to display puzzling behaviors -- such as repetition and incoherence -- even when exhibiting low perplexity. This highlights a key limitation of conventional evaluation metrics, which emphasize local prediction accuracy while overlooking long-range structural complexity. We introduce correlation dimension, a fractal-geometric measure of self-similarity, to quantify the epistemological complexity of text as perceived by a language model. This measure captures the hierarchical recurrence structure of language, bridging local and global properties in a unified framework. Through extensive experiments, we show that correlation dimension (1) reveals three distinct phases during pretraining, (2) reflects context-dependent complexity, (3) indicates a model's tendency toward hallucination, and (4) reliably detects multiple forms of degeneration in generated text. The method is computationally efficient, robust to model quantization (down to 4-bit precision), broadly applicable across autoregressive architectures (e.g., Transformer and Mamba), and provides fresh insight into the generative dynamics of LLMs.
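
The paper applies correlation dimension to text as perceived by a language model; the underlying quantity is the classical Grassberger-Procaccia estimate, where the correlation integral C(ε) is the fraction of point pairs closer than ε and the correlation dimension is the slope of log C(ε) versus log ε in the scaling region. As a rough illustration of that estimator (not the paper's exact pipeline), the sketch below computes it for a generic point cloud with NumPy; the function names, ε range, and toy trajectory are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def correlation_integral(points, eps):
    """Fraction of point pairs whose Euclidean distance is below eps."""
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    i, j = np.triu_indices(len(points), k=1)  # count each unordered pair once
    return np.mean(dists[i, j] < eps)

def correlation_dimension(points, eps_values):
    """Estimate D2 as the slope of log C(eps) against log eps."""
    log_c = np.log([correlation_integral(points, e) for e in eps_values])
    slope, _ = np.polyfit(np.log(eps_values), log_c, 1)
    return slope

# Toy check: samples from a smooth 1-D curve embedded in 3-D should give D2 near 1.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 1.0, 1000))
curve = np.stack([t, np.sin(6 * t), np.cos(6 * t)], axis=1)
print(correlation_dimension(curve, np.logspace(-1.5, -0.5, 8)))
```

In the paper's setting the point cloud would be derived from the model's per-token view of the text rather than a synthetic curve; the abstract does not specify the exact construction, so the above should be read only as the generic estimator it builds on.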

💡 Deep Analysis

Figure 1

📄 Full Content

📸 Image Gallery

  • context-length-corrdim.png
  • context-length-corrint.png
  • delay.png
  • lin-tegmark-corrdim.png
  • lin-tegmark.png
  • modelsize.png
  • newton-philosophy-rq.png
  • newton-philosophy-skip-sent.png
  • newton-philosophy-skips.png
  • newton-philosophy.png
  • selforg.png
  • sep-corrdim-compare.png
  • training-collapse.png
  • training.png

Reference

This content is AI-processed based on open access ArXiv data.
