Playbooks

The Running NeMo Framework on DGX Cloud playbook focuses on preparing a dataset and pre-training a foundation model with NeMo Framework on DGX Cloud. The playbook covers essential aspects of DGX Cloud, such as uploading containers, creating and mounting workspaces, launching jobs, and pre-training a model.

The Running NeMo Framework on Kubernetes playbook demonstrates deploying and managing NeMo using Kubernetes. The playbook covers cluster setup, NeMo Framework installation, data preparation, and model training.

The Foundation Model Pre-training using NeMo Framework playbook focuses on launching a foundation model pretraining job on your infrastructure and collecting the training artifacts produced by successful runs. It demonstrates the workflow of pretraining foundation models (GPT-3, T5, mT5, BERT) using NeMo Framework and the Pile dataset, producing checkpoints, logs, and event files.

The NeMo Framework AutoConfigurator playbook demonstrates how to use NeMo Framework AutoConfigurator to determine the optimal model size for a given compute and training budget. It then shows how to produce optimal foundation model pretraining and inference configurations that achieve the highest-throughput runs. The playbook focuses on automating the NeMo configuration process, including parameter tuning and optimization, to streamline setup.
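The core idea behind this kind of auto-configuration can be sketched as a search over candidate parallelism settings, keeping the one with the best estimated throughput. The sketch below is a simplified, hypothetical illustration — the throughput model is a toy stand-in and the candidate grid is invented, not NeMo's actual search space or API.

```python
from itertools import product

def estimate_throughput(tensor_parallel, pipeline_parallel, micro_batch, n_gpus=8):
    """Toy throughput estimate (samples/sec); real tools measure or model this."""
    gpus_per_replica = tensor_parallel * pipeline_parallel
    if n_gpus % gpus_per_replica != 0:
        return 0.0  # configuration does not fit the available GPU count
    replicas = n_gpus // gpus_per_replica
    # Pretend communication overhead grows with the degree of parallelism.
    overhead = 1.0 + 0.1 * (tensor_parallel - 1) + 0.2 * (pipeline_parallel - 1)
    return replicas * micro_batch / overhead

def best_config(n_gpus=8):
    # Candidate (tensor-parallel, pipeline-parallel, micro-batch) combinations.
    candidates = product([1, 2, 4], [1, 2], [1, 2, 4])
    return max(candidates, key=lambda c: estimate_throughput(*c, n_gpus=n_gpus))
```

A real tool replaces the toy estimator with measured or modeled step times per candidate configuration, but the selection loop has the same shape.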

The NeMo Framework Supervised fine-tuning (SFT) with Llama2 playbook shows how to fine-tune Llama2 models of various sizes using SFT against the dolly-15k dataset. It demonstrates data preprocessing, training, validation, testing, and running the fine-tuning scripts included in NeMo Framework.
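The data preprocessing step can be illustrated with a short sketch. This is not the playbook's exact script: it assumes dolly-15k-style records with "instruction", "context", and "response" fields and flattens them into input/output pairs in JSONL form, a common shape for SFT training data.

```python
import json

def to_sft_example(record):
    """Flatten a dolly-15k-style record into an input/output pair."""
    prompt = record["instruction"]
    if record.get("context"):
        prompt += "\n\nContext: " + record["context"]
    return {"input": prompt, "output": record["response"]}

# Tiny illustrative record; the real dataset has ~15k of these.
records = [
    {"instruction": "Summarize the text.", "context": "NeMo is a framework.",
     "response": "NeMo is an NVIDIA framework."},
]

with open("dolly_sft.jsonl", "w") as f:
    for rec in records:
        f.write(json.dumps(to_sft_example(rec)) + "\n")
```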

The NeMo Framework PEFT playbook shows how to fine-tune Llama2 and Mixtral-8x7B models of various sizes using PEFT against the PubMedQA dataset. It demonstrates data preprocessing, training, validation, testing, and running the fine-tuning scripts included in NeMo Framework, and it shows how to run inference against the fine-tuned model.
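A conceptual sketch of the LoRA idea behind PEFT (not NeMo's API): instead of updating the full weight matrix W, only a low-rank pair (A, B) is trained, and the effective weight is W + (alpha / r) * B @ A. The dimensions and scaling factor below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 16, 32, 4, 8

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))                # zero-init so training starts from W

def effective_weight(W, A, B, alpha, r):
    """Apply the low-rank update to the frozen weight."""
    return W + (alpha / r) * B @ A

W_eff = effective_weight(W, A, B, alpha, r)

# Only r * (d_in + d_out) = 192 parameters train, versus 512 in the full W.
trainable = A.size + B.size
```

Because B is zero-initialized, the model's behavior is unchanged before training begins, which is why PEFT runs can start from a pretrained checkpoint without disruption.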

The NeMo Framework Mixtral-8x7B SFT playbook shows how to fine-tune Mixtral-8x7B using SFT against the dolly-15k dataset. It demonstrates data preprocessing, training, validation, testing, and running the fine-tuning scripts included in NeMo Framework.

The NeMo Framework Mistral SFT playbook shows how to fine-tune the Mistral-7B model using SFT against the dolly-15k dataset. It demonstrates data preprocessing, training, validation, testing, and running the fine-tuning scripts included in NeMo Framework.

The NeMo Framework Mistral PEFT playbook shows how to fine-tune the Mistral-7B model using PEFT against the PubMedQA dataset. It demonstrates data preprocessing, training, validation, testing, and running the fine-tuning scripts included in NeMo Framework, and it shows how to run inference against the fine-tuned model.

© Copyright 2023-2024, NVIDIA. Last updated on Apr 12, 2024.