Convert a PyTorch-trained network
If you train your model directly with PyTorch, you need to convert it to TorchScript format before it can be used in AIAA.
First, pull and start the NVIDIA PyTorch container. Note that this release uses Triton 21.02, so you need the matching 21.02 PyTorch container.
docker run --gpus=1 -it --rm nvcr.io/nvidia/pytorch:21.02-py3
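If you want the converted TorchScript file to remain available on the host after the container exits, you can mount a working directory into the container. This is a minimal sketch; the host path and mount point below are only an example, not a required setup:
docker run --gpus=1 -it --rm -v $(pwd):/workspace nvcr.io/nvidia/pytorch:21.02-py3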
Then trace and save your model to a TorchScript file (see the TorchScript documentation on the PyTorch website).
The following example converts a U-Net network:
import urllib.request

import numpy as np
import torch
from PIL import Image
from torchvision import transforms

# An instance of your model.
model = torch.hub.load('mateuszbuda/brain-segmentation-pytorch', 'unet',
                       in_channels=3, out_channels=1, init_features=32, pretrained=True)
# Put the model in evaluation mode before tracing.
model.eval()

# An example input you would normally provide to your model's forward() method.
# Download an example image.
url, filename = ("https://github.com/mateuszbuda/brain-segmentation-pytorch/raw/master/assets/TCGA_CS_4944.png",
                 "TCGA_CS_4944.png")
urllib.request.urlretrieve(url, filename)

# Preprocess the example image and pass it to the model.
input_image = Image.open(filename)
m, s = np.mean(input_image, axis=(0, 1)), np.std(input_image, axis=(0, 1))
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=m, std=s),
])
input_tensor = preprocess(input_image)
input_batch = input_tensor.unsqueeze(0)

if torch.cuda.is_available():
    input_batch = input_batch.to('cuda')
    model = model.to('cuda')

# Use torch.jit.trace to generate a torch.jit.ScriptModule via tracing.
traced_script_module = torch.jit.trace(model, input_batch)

# Then save your ScriptModule to a file.
traced_script_module.save("unet.ts")
This Python code is adapted from the PyTorch website and the U-Net for brain MRI example.
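Before loading the exported file into AIAA, you can sanity-check it by loading it back with torch.jit.load and running the same example input through it. This is a minimal sketch; it assumes you run it in the same session as the example above, so that unet.ts and input_batch already exist:

import torch

# Load the traced ScriptModule back from disk.
loaded_model = torch.jit.load("unet.ts")
loaded_model.eval()

# Run the same example input through the loaded module.
with torch.no_grad():
    output = loaded_model(input_batch)
print(output.shape)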