NeMo Framework provides end-to-end scripts and code for data preparation and training of large transformer-based models.
Request access to the training and inference containers at https://developer.nvidia.com/nemo-framework.
- NeMo Framework User Guide
- Playbooks
- Running NeMo Framework on DGX Cloud
- Running NeMo Framework on Kubernetes
- Foundation Model Pre-training using NeMo Framework
- NeMo Framework AutoConfigurator
- NeMo Framework Supervised Fine-Tuning (SFT) with Llama2
- NeMo Framework PEFT with Llama2
- Model Overview
- Feature Matrix
- Software Component Versions
- Cloud Service Providers
- Model Guide
- Training NeMo Framework Models
- Training with Predefined Configurations
- Using AutoConfigurator to Find the Optimal Configuration
- Training with Custom Configurations
- Bring Your Own Dataset
- Model Training
- Resuming Training with a Different Number of Nodes
- Checkpoint Conversion
- Generalized PEFT Framework
- Model Fine-Tuning
- Model Prompt Learning
- Model Adapter Learning and IA3 Learning
- Model Evaluation
- Exporting the NeMo Models to TensorRT-LLM
- NeMo Data Curator
- Coverage
- General Usage
- Module-specific documentation
- Downloading and extracting text
- Document filtering
- Text cleaning and language separation
- Exact and fuzzy deduplication
- Classifier and heuristic-based quality filtering
- Downstream task decontamination (task deduplication)
- prepare_task_data
- Find the matching task N-grams within the training documents
- Remove matching N-grams above a user-defined threshold
- Model Alignment
- Model Alignment by RLHF
- Model Alignment by SteerLM Method
- SteerLM
- SteerLM vs RLHF
- Train a SteerLM model
- Step 1: Download Llama 2 LLM model
- Step 2: Download and Preprocess data for Attribute Prediction Modelling
- Step 3: Train the regression reward model on OASST+HelpSteer data
- Step 4: Generate annotations
- Step 5: Train the Attribute-Conditioned SFT model
- Step 6: Inference
- SteerLM: Novel Technique for Simple and Controllable Model Alignment
- Model Alignment by Direct Preference Optimization (DPO)
- Llama
- Deploying the NeMo Models in the NeMo Framework Inference Container
- Performance
- Changelog
- NeMo Framework 23.11
- NeMo Framework 23.10
- NeMo Framework 23.08.03
- NeMo Framework 23.08.02
- NeMo Framework 23.08.01
- NeMo Framework 23.08
- NeMo Framework 23.07
- NeMo Framework 23.05
- NeMo Framework 23.04.1
- NeMo Framework 23.04
- NeMo Framework 23.03
- NeMo Framework 23.01
- NeMo Framework 22.11
- NeMo Framework 22.09
- NeMo Framework 22.08.01
- NeMo Framework 22.08
- NeMo Framework 22.06-hotfix.01
- NeMo Framework 22.06
- NeMo Framework 22.05.01
- NeMo Framework 22.05
- NeMo Framework 22.04
- NeMo Framework 22.03
- NeMo Framework 22.02
- NeMo Framework 22.01
- Known Issues