(Available in api-version 0.7.0 and above)

List of inference models required by an operator.

Listed models are loaded by Triton Inference Server and made available to the requesting operator(s) before the start of any pipeline job that requires them.
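As a minimal sketch, a `models` list might appear in an operator definition as follows. The operator name, container image, and model name (`segmentation_ct_model`) are hypothetical placeholders, not values from this document:

```yaml
# Hypothetical operator definition fragment; field values are illustrative only.
operators:
  - name: my-inference-operator
    container:
      image: example/inference-operator:latest
    models:
      - name: segmentation_ct_model   # must match a model preloaded in Triton
```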
type: string

Name of an inference model, preloaded and available from Triton Inference Server.

A model's name serves three roles: it is the model's unique identifier, its file-system name, and the name by which the model is referenced when using the Triton Inference Client.
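For illustration, under Triton's standard model repository convention the model name also appears as a directory name on disk; the model name and file names below are hypothetical:

```
model-repository/
└── segmentation_ct_model/    # directory name == model name
    └── 1/                    # model version
        └── model.plan        # model file (format varies by backend)
```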

© Copyright 2018-2021, NVIDIA Corporation. All rights reserved. Last updated on Feb 1, 2023.