Framework Inference

For text-to-image models, the inference script generates images from text prompts defined in the config file.

To enable the inference stage with Stable Diffusion, update the configuration files as follows:

  1. In the defaults section of conf/config.yaml, update the fw_inference field to point to the desired Stable Diffusion inference configuration file. For example, if you want to use the stable_diffusion/text2img.yaml configuration, change the fw_inference field to stable_diffusion/text2img.

     defaults:
       - fw_inference: stable_diffusion/text2img
       ...

  2. In the stages field of conf/config.yaml, make sure the fw_inference stage is included. For example,

     stages:
       - fw_inference
       ...

  3. Configure the prompts and num_images_per_prompt fields of conf/fw_inference/stable_diffusion/text2img.yaml. Set model.restore_from_path to the .nemo checkpoint you want to generate images with.
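     The fields described above might look like the following sketch of conf/fw_inference/stable_diffusion/text2img.yaml. The prompt text and checkpoint path are illustrative placeholders, and the exact nesting of these keys may differ across framework versions:

     ```yaml
     # Illustrative sketch; field nesting may differ in your version.
     prompts:
       - "a photograph of an astronaut riding a horse"  # placeholder prompt
     num_images_per_prompt: 2
     model:
       restore_from_path: /path/to/checkpoint.nemo  # placeholder path to a trained .nemo checkpoint
     ```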

Remarks:

Three types of inference samplers are supported: 'DDIM', 'PLMS', and 'DPM'. The sampler can be changed through the config files. The 'DPM' sampler was added in a recent update and achieves comparable image quality with half the number of inference steps.
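Switching samplers is typically a one-line change in the inference config. The key name below (sampler_type) is an assumption for illustration and may differ in your version of the config schema:

```yaml
# Assumed key name for illustration; check your text2img.yaml for the actual field.
sampler_type: DPM  # one of: DDIM, PLMS, DPM
```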

© Copyright 2023-2024, NVIDIA. Last updated on May 17, 2024.