Pretraining Large Language Models with NVFP4
arxiv.org
Large Language Models (LLMs) today are powerful problem solvers across many domains, and they continue to get stronger as they scale in model size, training set size, and training set quality, as shown by extensive research and experimentation across the industry. Training a frontier model today requires on the order of tens to hundreds of yottaflops, which is a massive investment of time, compute, and energy. Improving pretraining efficiency is therefore essential to enable the next generation of even more capable LLMs. While 8-bit floating point (FP8) training is now widely adopted, transitioning to even narrower precision, such as 4-bit floating point (FP4), could unlock additional improvements in computational speed and resource utilization. However, quantization at this level poses challenges to training stability, convergence, and implementation, notably for large-scale models trained on long token horizons. In this study, we introduce a novel approach for stable and accurate training of large language models (LLMs) using the NVFP4 format. Our method integrates Random Hadamard transforms (RHT) to bound block-level outliers, employs a two-dimensional quantization scheme for consistent representations across both the forward and backward passes, utilizes stochastic rounding for unbiased gradient estimation, and incorporates selective high-precision layers. We validate our approach by training a 12-billion-parameter model on 10 trillion tokens -- the longest publicly documented training run in 4-bit precision to date. Our results show that the model trained with our NVFP4-based pretraining technique achieves training loss and downstream task accuracies comparable to an FP8 baseline. These findings highlight that NVFP4, when combined with our training approach, represents a major step forward in narrow-precision LLM training algorithms.
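
To make the abstract's recipe a bit more concrete, here is a minimal NumPy sketch of the core quantization idea: each small block of values shares one scale factor, magnitudes are snapped to the FP4 (E2M1) grid, and gradients can use stochastic rounding so the quantizer is unbiased in expectation. The 16-element block size and the E2M1 value grid match the NVFP4 format; everything else (the function name, treating the scale as a plain float instead of an FP8 E4M3 factor paired with a per-tensor FP32 scale, and ignoring the 2D weight-block layout) is a simplification for illustration, not NVIDIA's implementation.

```python
import numpy as np

E2M1_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])  # representable FP4 magnitudes

def quantize_nvfp4_block(x, stochastic=False, rng=None):
    """Quantize one block of values to signed FP4 (E2M1) with a shared scale (sketch only)."""
    amax = np.abs(x).max()
    if amax == 0.0:
        return np.zeros_like(x), 1.0
    scale = amax / E2M1_GRID[-1]                 # map the block maximum onto 6.0
    mag = np.abs(x) / scale
    # Locate the two neighbouring grid points around each magnitude.
    lo = np.clip(np.searchsorted(E2M1_GRID, mag, side="right") - 1, 0, len(E2M1_GRID) - 2)
    below, above = E2M1_GRID[lo], E2M1_GRID[lo + 1]
    if stochastic:
        # Stochastic rounding: round up with probability proportional to the distance
        # from the lower grid point, so the expected quantized value equals the input.
        rng = rng if rng is not None else np.random.default_rng()
        q = np.where(rng.random(mag.shape) < (mag - below) / (above - below), above, below)
    else:
        q = np.where(above - mag < mag - below, above, below)  # round-to-nearest
    return np.sign(x) * q, scale

# NVFP4 groups values into 16-element micro-blocks, each with its own scale.
block = np.random.default_rng(0).standard_normal(16).astype(np.float32)
q, s = quantize_nvfp4_block(block, stochastic=True)
print(np.round(q * s, 3))   # dequantized values, close to the original block
```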

NVIDIA just trained a 12B-parameter language model on 10 trillion tokens entirely in 4-bit precision.

Here’s why this matters:

  • NVFP4 delivers 2–3× faster math throughput and 50% less memory vs FP8
  • Accuracy? Practically identical. (MMLU-Pro: FP8 = 62.62%, NVFP4 = 62.58%)
  • Stability issues are addressed with Random Hadamard transforms, stochastic rounding, and 2D scaling (sketched below)
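
For a sense of how the Random Hadamard transform keeps 4-bit training stable, here is a small NumPy sketch. It is illustrative only: the block size of 16, the helper names, and the way the random sign vector is drawn are assumptions, not the paper's kernel.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n must be a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def random_hadamard_transform(x, rng, block=16):
    """Rotate each `block`-sized chunk of the last axis by a sign-randomized,
    orthonormal Hadamard matrix. The rotation is orthogonal, so it can be undone,
    but it spreads any single outlier across the whole block, which keeps the
    block maxima (and hence the quantization scale factors) well behaved."""
    signs = rng.choice([-1.0, 1.0], size=block)              # shared random sign flips
    T = (np.diag(signs) @ hadamard(block)) / np.sqrt(block)  # T @ T.T == I
    chunks = x.reshape(*x.shape[:-1], -1, block)
    return (chunks @ T).reshape(x.shape)

rng = np.random.default_rng(0)
x = np.zeros((4, 64), dtype=np.float32)
x[0, 3] = 100.0                                     # one extreme outlier
y = random_hadamard_transform(x, rng)
print(np.abs(x).max(), np.abs(y).max())             # 100.0 vs 25.0: the outlier is spread out
```

Because the rotation is orthogonal, its transpose can be folded into the other GEMM operand so the matrix product is mathematically unchanged; the benefit is simply that the rotated values have far fewer extreme outliers to inflate the per-block scale factors.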

This is the longest publicly documented 4-bit pretraining run to date, and it matches the FP8 baseline on both training loss and downstream accuracy.

The next generation of frontier models could be trained faster and more cheaply, without compromising accuracy.

@[email protected]

“next generation of frontier models”

lol. Too much grifter speak for me. Slow down on that kool aid.

☆ Yσɠƚԋσʂ ☆ (creator)

People building their whole identity around hating LLM tech will never stop being hilarious.

@[email protected]

iTs jUsT a PaTtErN mAcHiNe
