Model Export to TensorRT-LLM
To enable the export stage for a NeVa model, update the following configuration files:
1. In the `defaults` section of `conf/config.yaml`, update the `export` field to point to the desired NeVa configuration file. For example, to use the `neva/export_neva` configuration, set the `export` field to `neva/export_neva`:

   ```yaml
   defaults:
     - export: neva/export_neva
   ...
   ```
2. In the `stages` field of `conf/config.yaml`, make sure the `export` stage is included. For example:

   ```yaml
   stages:
     - export
   ...
   ```
3. Configure `infer.max_input_len` and `infer.max_output_len` in the `conf/export/neva/export_neva.yaml` file to set the maximum input and output lengths for the NVIDIA TensorRT-LLM model, as in the sketch below.
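
The following is a minimal sketch of the `infer` section of `conf/export/neva/export_neva.yaml`; the length values are illustrative assumptions, not defaults, so adjust them to your workload:

```yaml
infer:
  max_input_len: 2048   # illustrative: longest prompt (in tokens) the engine will accept
  max_output_len: 1024  # illustrative: longest generation (in tokens) the engine will produce
```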
Remarks:
- To load a pretrained checkpoint for inference, set the `restore_from_path` field in the `model` section of `conf/export/neva/export_neva.yaml` to the path of the pretrained checkpoint in `.nemo` format.
- Only `max_batch_size: 1` is supported for now.
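
As a hedged example, the two remarks above correspond to settings like the following in `conf/export/neva/export_neva.yaml`; the checkpoint path is a placeholder to replace with your own:

```yaml
model:
  restore_from_path: /path/to/checkpoints/neva_model.nemo  # placeholder: path to your .nemo checkpoint

infer:
  max_batch_size: 1  # the only supported value for now
```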