For ControlNet, the export script generates four optimized inference models: the VAE Decoder, the UNet, the CLIP Encoder, and the control model.
In the `defaults` section of `conf/config.yaml`, update the `export` field to point to the desired ControlNet inference configuration file. For example, if you want to use the `controlnet/export_controlnet.yaml` configuration, change the `export` field to `controlnet/export_controlnet`:

```yaml
defaults:
  - export: controlnet/export_controlnet
  ...
```
In the `stages` field of `conf/config.yaml`, make sure the `export` stage is included. For example:

```yaml
stages:
  - export
  ...
```
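Putting the two settings above together, the relevant parts of `conf/config.yaml` would look like the following sketch (all other entries of the real file are omitted here):

```yaml
# Sketch of conf/config.yaml, showing only the two fields
# discussed above; the actual file contains additional entries.
defaults:
  - export: controlnet/export_controlnet

stages:
  - export
```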
Configure `infer.num_images_per_prompt` in the `conf/export/controlnet/export_controlnet.yaml` file to set the batch size used for the ONNX and NVIDIA TensorRT models.
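As a sketch, the knob sits under the `infer` section of `conf/export/controlnet/export_controlnet.yaml`; only `infer.num_images_per_prompt` is taken from the text above, and any surrounding structure is an assumption:

```yaml
# Sketch of conf/export/controlnet/export_controlnet.yaml.
# num_images_per_prompt controls the batch size baked into the
# exported ONNX and TensorRT models.
infer:
  num_images_per_prompt: 1
```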
Remarks:

- To load a pretrained checkpoint for inference, set the `restore_from_path` field in the `model` section of `conf/export/controlnet/export_controlnet.yaml` to the path of the pretrained checkpoint in `.nemo` format.
- Only `num_images_per_prompt: 1` is supported for now.
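For illustration, the checkpoint setting described above could look like the following sketch; the checkpoint path shown is a placeholder, not a real file:

```yaml
# Sketch: pointing the export stage at a pretrained ControlNet
# checkpoint in conf/export/controlnet/export_controlnet.yaml.
# Replace the placeholder path with your own .nemo file.
model:
  restore_from_path: /path/to/controlnet_checkpoint.nemo
```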