Important
NeMo 2.0 is an experimental feature and is currently released in the dev container only: nvcr.io/nvidia/nemo:dev. Please refer to the NeMo 2.0 overview for information on getting started.
Performance
Training Quality Results
InstructPix2Pix is an image-editing model that transforms an input image according to a user-provided instruction. For example, given a photo of Toy Jensen, the model can edit the image to match the instruction you supply.
Here are some examples generated using our NeMo Stable Diffusion 1.2 model, fine-tuned with NeMo InstructPix2Pix. For each instruction, we showcase 8 distinct images generated from different seeds:
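The examples above come from the NeMo model and sampling scripts. Purely as a rough illustration of producing several variations per instruction from different seeds, the sketch below uses the publicly available Hugging Face diffusers InstructPix2Pix pipeline and checkpoint (timbrooks/instruct-pix2pix); the input filename and instruction are made up, and this is not the NeMo inference path.

```python
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from diffusers.utils import load_image

# Assumption: the public diffusers checkpoint, not the NeMo fine-tuned weights.
pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

input_image = load_image("toy_jensen.png")                # hypothetical input photo
instruction = "make it look like a watercolor painting"   # hypothetical instruction

# One edit per seed; different seeds yield distinct images for the same instruction.
edits = [
    pipe(
        instruction,
        image=input_image,
        num_inference_steps=100,
        generator=torch.Generator(device="cuda").manual_seed(seed),
    ).images[0]
    for seed in range(8)
]
```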
Inference Performance Results
Latency is measured from just before text encoding (CLIP) to just after decoding of the output image (VAE). Framework (FW) numbers use Torch Automatic Mixed Precision (AMP) for FP16 computation. For TensorRT (TRT), the models are exported with FP16 acceleration, and the optimized TRT engine setup from the deployment directory is used so that TRT and framework numbers are collected in the same environment. A minimal timing sketch follows the setup list below.
GPU: NVIDIA DGX A100 (1x A100 80 GB)
Batch Size: Synonymous with num_images_per_prompt
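As a rough illustration only, here is a minimal sketch of the framework-side timing recipe with Torch AMP. The callables encode_text, denoise_step, and decode_image are hypothetical stand-ins for the pipeline's CLIP text encoder, one UNet denoising step, and the VAE decoder; they are not NeMo APIs.

```python
import time
import torch

def measure_latency(encode_text, denoise_step, decode_image,
                    prompt_ids, image_latents,
                    num_images_per_prompt=1, steps=100):
    """Time one generation: the window opens just before CLIP text encoding
    and closes just after the VAE decodes the output images."""
    torch.cuda.synchronize()
    start = time.perf_counter()

    # Framework (FW) numbers in the table use Torch AMP for FP16 computation.
    with torch.autocast("cuda", dtype=torch.float16):
        text_emb = encode_text(prompt_ids)
        # Batch size is synonymous with num_images_per_prompt.
        latents = image_latents.repeat(num_images_per_prompt, 1, 1, 1)
        for _ in range(steps):              # e.g. 100 inference steps as in the table
            latents = denoise_step(latents, text_emb)
        images = decode_image(latents)      # VAE decoding is the last timed stage

    torch.cuda.synchronize()                # wait for all queued GPU work before stopping the clock
    return time.perf_counter() - start, images
```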
| Model | Batch Size | Sampler | Inference Steps | TRT FP16 Latency (s) | FW FP16 (AMP) Latency (s) | TRT vs FW Speedup (x) |
|---|---|---|---|---|---|---|
| InstructPix2Pix (Res=256) | 1 | N/A | 100 | 1.0 | 3.6 | 3.6 |
| InstructPix2Pix (Res=256) | 2 | N/A | 100 | 1.3 | 3.7 | 2.8 |
| InstructPix2Pix (Res=256) | 4 | N/A | 100 | 2.2 | 4.9 | 2.2 |