MOSS: Efficient and Accurate FP8 LLM Training with Microscaling and Automatic Scaling
Reading time: 2 minutes
📝 Original Info
- Title: MOSS: Efficient and Accurate FP8 LLM Training with Microscaling and Automatic Scaling
- ArXiv ID: 2511.05811
- Date: 2025-11-08
- Authors: Not listed (author information was not provided in the source data)
📝 Abstract
Training large language models with FP8 formats offers significant efficiency gains. However, the reduced numerical precision of FP8 poses challenges for stable and accurate training. Current frameworks preserve training performance using mixed-granularity quantization, i.e., applying per-group quantization for activations and per-tensor/block quantization for weights. While effective, per-group quantization requires scaling along the inner dimension of matrix multiplication, introducing additional dequantization overhead. Moreover, these frameworks often rely on just-in-time scaling to dynamically adjust scaling factors based on the current data distribution. However, this online quantization is inefficient for FP8 training, as it involves multiple memory reads and writes that negate the performance benefits of FP8. To overcome these limitations, we propose MOSS, a novel FP8 training framework that ensures both efficiency and numerical stability. MOSS introduces two key innovations: (1) a two-level microscaling strategy for quantizing sensitive activations, which balances precision and dequantization cost by combining a high-precision global scale with compact, power-of-two local scales; and (2) automatic scaling for weights in linear layers, which eliminates the need for costly max-reduction operations by predicting and adjusting scaling factors during training. Leveraging these techniques, MOSS enables efficient FP8 training of a 7B parameter model, achieving performance comparable to the BF16 baseline while delivering up to 34% higher training throughput.
💡 Deep Analysis
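The two innovations can be illustrated with short sketches. The first is a minimal NumPy mock-up of two-level microscaling as the abstract describes it: one high-precision (FP32) global scale per tensor, combined with a compact power-of-two local scale per group. The group size, the E4M3 range constant, and all function names here are assumptions, and the FP8 cast itself is only simulated by clipping; this is not the paper's implementation.

```python
import numpy as np

FP8_E4M3_MAX = 448.0  # largest finite magnitude representable in FP8 E4M3

def microscale_quantize(x, group_size=32):
    """Two-level microscaling sketch: FP32 global scale per tensor plus
    a power-of-two local scale per group (names are illustrative)."""
    assert x.size % group_size == 0
    g = x.reshape(-1, group_size)
    # Level 1: high-precision global scale maps the tensor max onto the FP8 range.
    global_scale = max(np.abs(g).max() / FP8_E4M3_MAX, 1e-12)
    # Level 2: per-group power-of-two scale, storable as a tiny integer exponent.
    group_max = np.abs(g).max(axis=1, keepdims=True)
    ratio = group_max / (global_scale * FP8_E4M3_MAX)
    local_exp = np.ceil(np.log2(np.maximum(ratio, 2.0 ** -24)))
    # A real kernel would cast to float8_e4m3 here; clipping to the
    # representable range stands in for that cast in this sketch.
    q = np.clip(g / (global_scale * 2.0 ** local_exp),
                -FP8_E4M3_MAX, FP8_E4M3_MAX)
    return q, np.float32(global_scale), local_exp.astype(np.int8)

def microscale_dequantize(q, global_scale, local_exp):
    # Power-of-two local scales make dequantization an exponent
    # adjustment rather than a full multiply per group.
    return q * global_scale * 2.0 ** local_exp.astype(np.float32)
```

The second sketch illustrates the idea behind automatic scaling for weights: replace the per-step max-reduction over the weight tensor with a predicted scaling factor that is cheaply adjusted as training proceeds. The overflow-feedback update rule and every name below are illustrative guesses in the spirit of the abstract, not the paper's algorithm.

```python
class AutoScale:
    """Sketch of automatic weight scaling: predict the next step's scaling
    factor instead of computing an exact max over the weights each step.
    All names and the update rule are illustrative assumptions."""

    def __init__(self, fp8_max=448.0, margin=2.0):
        self.fp8_max = fp8_max
        self.margin = margin   # headroom so the prediction rarely saturates
        self.amax = 1.0        # running estimate of max|W|

    def scale(self):
        # Predicted scaling factor for the upcoming FP8 matmul; O(1) cost,
        # no reduction over the weight tensor is needed.
        return self.amax * self.margin / self.fp8_max

    def update(self, saturated, decay=0.99):
        # Cheap feedback (e.g. a saturation flag from the FP8 cast)
        # replaces the exact amax: back off fast, tighten slowly.
        if saturated:
            self.amax *= 2.0
        else:
            self.amax *= decay
```

A training step would divide the weights by `scaler.scale()` before the FP8 cast, run the matmul, and then call `scaler.update(...)` with the saturation flag, so the scaling factor tracks the weight distribution without any max-reduction on the critical path.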
Reference
This content is AI-processed based on open-access arXiv data.