Convert PyTorch trained network

To convert your PyTorch-trained models for AIAA, you first need to pull and start the NVIDIA PyTorch container.

Note that this release uses Triton 20.08, so we need to use the 20.08 PyTorch container.

docker run --gpus all -it --rm nvcr.io/nvidia/pytorch:20.08-py3
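
If you want the traced TorchScript file (created in the next step) to also be available on the host after the container exits, you can mount a local directory into the container. The command below is just a sketch: the host directory name and the /workspace/torchscript mount point are placeholders, not something AIAA requires.

docker run --gpus all -it --rm -v $(pwd)/torchscript:/workspace/torchscript nvcr.io/nvidia/pytorch:20.08-py3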

Then trace and save your model to a TorchScript file (follow the instructions on the PyTorch website).

The following example code converts a U-Net network:

import torch

# An instance of your model.
model = torch.hub.load('mateuszbuda/brain-segmentation-pytorch', 'unet',
                       in_channels=3, out_channels=1, init_features=32, pretrained=True)

# An example input you would normally provide to your model's forward() method.
# Download an example image.
import urllib.request
url, filename = ("https://github.com/mateuszbuda/brain-segmentation-pytorch/raw/master/assets/TCGA_CS_4944.png", "TCGA_CS_4944.png")
urllib.request.urlretrieve(url, filename)

# Pass that example to our model.
import numpy as np
from PIL import Image
from torchvision import transforms

input_image = Image.open(filename)
m, s = np.mean(input_image, axis=(0, 1)), np.std(input_image, axis=(0, 1))
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=m, std=s),
])
input_tensor = preprocess(input_image)
input_batch = input_tensor.unsqueeze(0)

# Use torch.jit.trace to generate a torch.jit.ScriptModule via tracing.
traced_script_module = torch.jit.trace(model, input_batch)

# Then save your ScriptModule to a file.
traced_script_module.save("unet.pt")
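
After saving, it is worth a quick sanity check that the TorchScript file reloads cleanly and reproduces the traced module's output. The snippet below is a minimal sketch; it assumes you are still in the same Python session, so that input_batch and traced_script_module from the example above are defined.

import torch

# Reload the TorchScript file that was just saved and run it on the same example input.
reloaded = torch.jit.load("unet.pt")
with torch.no_grad():
    output = reloaded(input_batch)
    reference = traced_script_module(input_batch)

# The reloaded module should reproduce the traced module's output.
print(output.shape)
print(torch.allclose(output, reference))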

Note

This Python code is adapted from the PyTorch website and the U-Net for brain MRI project.
