User Guide (Latest Version)

The NVIDIA NeMo framework accelerates the entire AI workflow end-to-end, from data preparation to model training to inference. It delivers high training throughput for advanced generative AI models by incorporating recent training techniques such as model parallelism and optimized attention mechanisms. For inference, the NeMo framework provides a path that leverages TensorRT-LLM, a specialized library for accelerating and optimizing LLM inference on NVIDIA GPUs.

In this section, you will find training accuracy results (loss curves and academic benchmarks), training performance results (throughput), and inference performance results (latency and throughput) for the models listed below.

Last updated on Jun 19, 2024.