Transformer Engine documentation
Transformer Engine (TE) is a library for accelerating Transformer models on NVIDIA GPUs, including support for 8-bit floating point (FP8) precision on Hopper GPUs, to provide better performance with lower memory utilization in both training and inference. TE provides a collection of highly optimized building blocks for popular Transformer architectures and an automatic mixed precision-like API that can be used seamlessly with your PyTorch code. TE also includes a framework-agnostic C++ API that can be integrated with other deep learning libraries to enable FP8 support for Transformers.
As the number of parameters in Transformer models continues to grow, training and inference for architectures such as BERT, GPT and T5 become very memory and compute intensive. Most deep learning frameworks train with FP32 by default. This is not essential, however, to achieve full accuracy for many deep learning models. Using mixed-precision training, which combines single-precision (FP32) with lower precision (e.g. FP16) formats when training a model, results in significant speedups with minimal differences in accuracy as compared to FP32 training. The Hopper GPU architecture introduced FP8 precision, which offers improved performance over FP16 with no degradation in accuracy. Although all major deep learning frameworks support FP16, native FP8 support is not yet available in those frameworks.
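As a point of reference, mixed-precision training in PyTorch is typically done with autocasting plus gradient scaling. The following is a minimal sketch of that pattern (the toy model and shapes are illustrative, not part of TE):

import torch

# Hypothetical toy model; any nn.Module works the same way.
model = torch.nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()  # rescales gradients to avoid FP16 underflow

inp = torch.randn(32, 1024, device="cuda")
with torch.autocast(device_type="cuda", dtype=torch.float16):
    out = model(inp)          # matmuls run in FP16
    loss = out.float().sum()  # reduction kept in FP32
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()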
TE addresses the problem of FP8 support by providing APIs that integrate with popular Large Language Model (LLM) libraries. It provides a Python API (initially supporting PyTorch, with support for more frameworks in the future) consisting of modules for easily building Transformer layers, as well as a framework-agnostic C++ library including the structs and kernels needed for FP8 support. Modules provided by TE internally maintain the scaling factors and other values needed for FP8 training, greatly simplifying mixed precision training for users.
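Conceptually, a scaling factor maps a tensor's dynamic range into the representable range of an FP8 format. The sketch below illustrates the idea in plain PyTorch; it is not TE's internal implementation, and the helper name is hypothetical:

import torch

E4M3_MAX = 448.0  # largest representable magnitude in the E4M3 FP8 format

def compute_fp8_scale(tensor: torch.Tensor) -> torch.Tensor:
    # Hypothetical helper: choose a scale so the tensor's largest
    # magnitude maps near the top of E4M3's range.
    amax = tensor.abs().max()
    return E4M3_MAX / amax

x = torch.randn(16, 16) * 10.0
scale = compute_fp8_scale(x)
x_scaled = x * scale  # values now fit within E4M3's range
# ...cast to FP8, compute, then divide results by `scale` to recover magnitudes.

Under TE's delayed scaling recipe, the scale is derived from a history of amax values observed in previous iterations rather than recomputed from the current tensor, which is why TE modules track this state internally.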
Transformer Engine in action:
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# Set dimensions.
in_features = 768
out_features = 3072
hidden_size = 2048

# Initialize model and inputs.
model = te.Linear(in_features, out_features, bias=True)
inp = torch.randn(hidden_size, in_features, device="cuda")

# Create FP8 recipe. Note: All input args are optional.
fp8_recipe = recipe.DelayedScaling(margin=0, interval=1, fp8_format=recipe.Format.E4M3)

# Enable autocasting for the forward pass.
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    out = model(inp)

loss = out.sum()
loss.backward()
Easy-to-use PyTorch modules for building Transformer layers with FP8 support on H100 GPUs (see the sketch below).
Optimizations (e.g. fused kernels) for Transformer models across all precisions and NVIDIA GPU architectures.
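For example, a complete Transformer block can be instantiated from the single te.TransformerLayer module and run under FP8 autocasting, as in this sketch (the dimensions are illustrative, and the (seq, batch, hidden) input layout is assumed to be the module's default):

import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# Build a complete Transformer block from a single TE module.
layer = te.TransformerLayer(
    hidden_size=2048,
    ffn_hidden_size=8192,
    num_attention_heads=16,
)

# Assumed input layout: (sequence length, batch size, hidden size).
hidden_states = torch.randn(128, 4, 2048, device="cuda")

fp8_recipe = recipe.DelayedScaling(fp8_format=recipe.Format.HYBRID)
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    out = layer(hidden_states)

Smaller building blocks such as te.Linear (shown above), te.LayerNormLinear and te.LayerNormMLP are also available when finer-grained control over the layer structure is needed.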