Large Language Models (1.0.0)

Release Notes

Summary

This is the first general release of NIM.

Language Models

  • Llama 3 8B Instruct

  • Llama 3 70B Instruct

  • Mistral-7B-Instruct-v0.3

  • Mixtral-8x7B-v0.1

  • Mixtral-8x22B-v0.1
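The models above are served through an OpenAI-compatible API. As a minimal sketch (the endpoint path `/v1/chat/completions` and the model identifier `meta/llama3-8b-instruct` are assumptions, not taken from this page), a chat-completions request payload might look like:

```python
import json

# Hedged sketch of a request body for an OpenAI-compatible
# chat-completions endpoint. The model identifier below is an
# assumed name for Llama 3 8B Instruct; check your deployment's
# model list before using it.
payload = {
    "model": "meta/llama3-8b-instruct",
    "messages": [
        {"role": "user", "content": "Write a haiku about GPUs."}
    ],
    "max_tokens": 64,
}

# Serialize to JSON as it would be sent in the POST body.
body = json.dumps(payload)
print(body)
```

The same payload shape applies to any of the listed models; only the `model` field changes.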

Known Issues

P-Tuning is not supported.

Empty metrics values on multi-GPU TensorRT-LLM models

The metrics gpu_cache_usage_perc, num_requests_running, and num_requests_waiting are not reported for multi-GPU TensorRT-LLM models, because TensorRT-LLM currently does not expose iteration statistics in orchestrator mode.
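
To verify which of these metrics your deployment actually reports, you can parse the Prometheus-format exposition text and compare against the affected names. This is a self-contained sketch; the sample text below is illustrative, not real NIM output, and the metrics endpoint path on your deployment may differ:

```python
# Hedged sketch: detect which of the affected metrics appear in a
# Prometheus-format metrics exposition. The sample text is made up
# for illustration; in practice you would fetch it from the server's
# metrics endpoint.
AFFECTED = {"gpu_cache_usage_perc", "num_requests_running", "num_requests_waiting"}

sample = """\
# HELP num_requests_running Number of requests currently running
# TYPE num_requests_running gauge
num_requests_running 2
# HELP gpu_cache_usage_perc GPU KV-cache usage
# TYPE gpu_cache_usage_perc gauge
gpu_cache_usage_perc 0.37
"""

def reported_metrics(text):
    """Return the set of metric names that have at least one sample line."""
    names = set()
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and HELP/TYPE comments
        # Metric name is everything before an optional label block or value.
        names.add(line.split("{")[0].split()[0])
    return names

missing = AFFECTED - reported_metrics(sample)
print(sorted(missing))  # → ['num_requests_waiting']
```

On a multi-GPU TensorRT-LLM deployment, all three affected metrics would be missing until the underlying TensorRT-LLM limitation is resolved.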

"No tokenizer found" error when running PEFT

This warning can be safely ignored.

© Copyright 2024, NVIDIA Corporation. Last updated on Jul 22, 2024.