Tutorials for NeMo Framework#

Before You Start#

Before starting the tutorials, ensure you have:

  • NeMo Framework Container: A running instance of the latest NeMo Framework container.

  • Model Checkpoint: Access to a Megatron Bridge checkpoint (the tutorials use Llama 3.2 1B Instruct converted from Hugging Face format).

  • GPU Resources: A CUDA-compatible GPU with sufficient memory.

  • Jupyter Environment: The ability to run Jupyter notebooks.
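
You can sanity-check these prerequisites from a shell before starting. The snippet below is an illustrative sketch (the helper function names are not part of NeMo; nvidia-smi ships with the NVIDIA driver):

```shell
# Sketch of a quick prerequisite check (helper names are illustrative).

# A Hugging Face token is needed for gated checkpoints such as Llama 3.2.
has_hf_token() {
  [ -n "${HF_TOKEN:-}" ]
}

# A CUDA-capable GPU should be visible to the driver.
has_gpu() {
  command -v nvidia-smi >/dev/null 2>&1 && nvidia-smi >/dev/null 2>&1
}

if ! has_hf_token; then
  echo "HF_TOKEN is not set; gated models and datasets will fail to download" >&2
fi
if ! has_gpu; then
  echo "No CUDA GPU detected; the tutorials require GPU resources" >&2
fi
```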


Available Tutorials#

Build your expertise with these progressive tutorials:

Orchestrating evaluations with NeMo Run

Launch deployment and evaluation jobs using NeMo Run.

Basic evaluation with MMLU

Deploy models and run evaluations with the MMLU benchmark for both completions and chat endpoints.

https://github.com/NVIDIA-NeMo/Eval/tree/main/tutorials/mmlu.ipynb

Enable additional evaluation harnesses

Discover how to extend evaluation capabilities by installing additional harnesses and running HumanEval coding assessments.

https://github.com/NVIDIA-NeMo/Eval/tree/main/tutorials/simple-evals.ipynb

Configure custom tasks

Master custom evaluation workflows by running the WikiText benchmark with advanced configuration and log-probability analysis.

https://github.com/NVIDIA-NeMo/Eval/tree/main/tutorials/wikitext.ipynb

Run the Notebook Tutorials#

  1. Start NeMo Framework Container:

    # set your Hugging Face token for access to gated datasets and checkpoints
    export HF_TOKEN=hf_...
    # -p publishes the port Jupyter listens on in the next step;
    # ${TAG} is the NeMo Framework container tag pulled from NGC
    docker run --rm -it -w /workdir -v $(pwd):/workdir \
      -e HF_TOKEN \
      -p 8888:8888 \
      --entrypoint bash --gpus all \
      nvcr.io/nvidia/nemo:${TAG}
    
  2. Launch Jupyter:

    jupyter lab --ip=0.0.0.0 --port=8888 --allow-root
    
  3. In the Jupyter interface, navigate to the tutorials/ directory and open the desired notebook.
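
If you prefer to run a tutorial notebook non-interactively instead of opening it in the browser, standard Jupyter tooling can execute it from the shell. The helper below is a sketch (the function names are illustrative; the notebook path is from this repository's tutorials/ directory):

```shell
# Derive an output filename such as mmlu-executed.ipynb from tutorials/mmlu.ipynb
executed_name() {
  echo "$(basename "$1" .ipynb)-executed.ipynb"
}

# Execute a notebook headlessly with nbconvert and save the executed copy
run_notebook() {
  jupyter nbconvert --to notebook --execute "$1" --output "$(executed_name "$1")"
}

# Example (run inside the container, from /workdir):
#   run_notebook tutorials/mmlu.ipynb
```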