FP8-Flow-MoE: A Casting-Free FP8 Recipe without Double Quantization Error
Reading time: 1 minute
📝 Original Info
- Title: FP8-Flow-MoE: A Casting-Free FP8 Recipe without Double Quantization Error
- ArXiv ID: 2511.02302
- Date: 2025-11-04
- Authors: Not provided (the paper metadata does not list author information)
📝 Abstract
Training large Mixture-of-Experts (MoE) models remains computationally prohibitive due to their extreme compute and memory demands. Although low-precision training promises to accelerate computation and reduce memory footprint, existing implementations still rely on BF16-dominated dataflows with frequent quantize-dequantize (Q/DQ) conversions. These redundant casts erode much of FP8's theoretical efficiency. However, naively removing these casts by keeping dataflows entirely in FP8 introduces double quantization error: tensors quantized along different dimensions accumulate inconsistent scaling factors, degrading numerical stability. We propose FP8-Flow-MoE, an FP8 training recipe featuring a quantization-consistent, FP8-centric dataflow with a scaling-aware transpose and fused FP8 operators that streamline computation and reduce the number of explicit cast operations from 12 to 2. Evaluations on a 671B-parameter MoE model demonstrate up to 21% higher throughput and 16.5 GB lower memory usage per GPU compared to BF16 and naïve FP8 baselines, while maintaining stable convergence. We provide a plug-and-play FP8 recipe compatible with TransformerEngine and Megatron-LM, which will be open-sourced soon.
💡 Deep Analysis
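To make the "double quantization error" concrete: if a tensor is quantized row-wise, dequantized, transposed, and then re-quantized column-wise (as a naïve all-FP8 dataflow would do around a transpose), the second rounding step uses scales derived from already-rounded values, so the two errors compound. A scaling-aware transpose instead permutes the stored FP8 values and carries the original scaling factors along, so the data is only rounded once. The snippet below is a minimal NumPy sketch of this idea, not the paper's implementation; the `quantize_rowwise` helper, the `FP8_E4M3_MAX` constant, and the float-simulated FP8 grid are illustrative assumptions.

```python
# Minimal sketch (NOT the paper's code): why re-quantizing along a new axis
# compounds error, and how a scaling-aware transpose avoids the second rounding.
import numpy as np

FP8_E4M3_MAX = 448.0  # assumed dynamic range for an E4M3-style format


def quantize_rowwise(x):
    """Per-row symmetric quantization to an FP8-like grid (simulated in float)."""
    scale = np.abs(x).max(axis=1, keepdims=True) / FP8_E4M3_MAX
    q = np.round(x / scale)  # stand-in for FP8 rounding
    return q, scale


def dequantize(q, scale):
    return q * scale


rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))

# Path A (naïve FP8 dataflow): quantize row-wise, dequantize, transpose,
# then quantize again along the new leading axis -> two rounding steps
# with inconsistent scaling factors.
q1, s1 = quantize_rowwise(x)
x_hat_t = dequantize(q1, s1).T
q2, s2 = quantize_rowwise(x_hat_t)   # second quantization over a different axis
naive_t = dequantize(q2, s2)

# Path B (scaling-aware transpose): permute the quantized values and reuse the
# original row scales, so the data is rounded only once.
aware_t = (q1 * s1).T

print("double-quantization error :", np.abs(naive_t - x.T).max())
print("scaling-aware transpose   :", np.abs(aware_t - x.T).max())
```

Running the sketch shows the naïve path accumulating strictly more error than the scaling-aware path, which mirrors the abstract's claim that keeping a consistent quantization scheme across the dataflow is what allows the explicit casts to be removed without destabilizing training.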
📄 Full Content
Reference
This content is AI-processed based on open access ArXiv data.