Training Large Language Models (LLMs) is prohibitively expensive, driving interest in low-precision fully-quantized training (FQT). While novel 4-bit formats like NVFP4 offer substantial efficiency gains, achieving near-lossless training at such low precision remains challenging. We introduce TetraJet-v2, an end-to-end 4-bit FQT method that leverages NVFP4 for activations, weights, and gradients in all linear layers. We identify two critical issues hindering low-precision LLM training: weight oscillation and outliers. To address these, we propose: 1) an unbiased double-block quantization method for NVFP4 linear layers, 2) OsciReset, an algorithm to suppress weight oscillation, and 3) OutControl, an algorithm to retain outlier accuracy. TetraJet-v2 consistently outperforms prior FP4 training methods on LLM pre-training across model sizes up to 370M parameters and data sizes up to 200B tokens, reducing the performance gap to full-precision training by an average of 51.3%.
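To make the quantization setting concrete, below is a minimal, illustrative sketch of two-level block quantization in the spirit of NVFP4-style formats (per-block scales plus a per-tensor scale). This is not the paper's unbiased double-block method; the block size, scale handling, and rounding choices here are assumptions for demonstration only.

```python
# Illustrative sketch of two-level (per-block + per-tensor) FP4-style quantization.
# NOT TetraJet-v2's actual double-block scheme; all parameter choices are assumptions.
import torch

# Representable non-negative magnitudes of the FP4 E2M1 format.
FP4_E2M1_GRID = torch.tensor([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])


def quantize_fp4_two_level(x: torch.Tensor, block: int = 16) -> torch.Tensor:
    """Fake-quantize a tensor with per-block scales and a per-tensor scale."""
    n = x.numel()
    flat = x.flatten()
    pad = (-n) % block
    flat = torch.nn.functional.pad(flat, (0, pad))
    blocks = flat.view(-1, block)

    # Level 1: per-tensor scale maps the global max magnitude into the FP4 range.
    tensor_scale = (blocks.abs().max() / FP4_E2M1_GRID[-1]).clamp(min=1e-12)

    # Level 2: per-block scales (stored in a low-precision format on real hardware).
    block_scale = (blocks.abs().amax(dim=1, keepdim=True)
                   / (tensor_scale * FP4_E2M1_GRID[-1])).clamp(min=1e-12)

    # Round each scaled value to the nearest representable FP4 magnitude.
    scaled = blocks / (tensor_scale * block_scale)
    idx = (scaled.abs().unsqueeze(-1) - FP4_E2M1_GRID).abs().argmin(dim=-1)
    q = FP4_E2M1_GRID[idx] * scaled.sign()

    # Dequantize so the caller can inspect reconstruction error.
    return (q * block_scale * tensor_scale).flatten()[:n].view_as(x)


x = torch.randn(1024)
x_hat = quantize_fp4_two_level(x)
print("mean abs error:", (x - x_hat).abs().mean().item())
```

The sketch uses deterministic round-to-nearest; an unbiased variant (as the abstract's "unbiased double-block quantization" suggests) would instead use a rounding rule whose expected output equals the input, e.g. stochastic rounding between adjacent grid points.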