Explains key terms and concepts related to Cloud Functions.



Function: An inference request that can contain one or more models and is executed on the inference container.
Asset: A file that can be uploaded, downloaded, and used during inferencing.
Registry: A central management point for storing and retrieving Docker containers and models.
NGC (NVIDIA GPU Cloud): A portal of enterprise services, software, and management tools supporting end-to-end AI.
NGC ID: The organization identifier.
NGC CLI: The command-line interface for managing content and services within NGC.
AK (API Keys): Used to authenticate machine clients, as well as human clients that cannot use Service Accounts.
SA (Service Accounts): Used to authenticate machine clients; provides an OAuth2 implementation with keys that are forced to expire.
Triton Inference Server: Open-source inference-serving software that standardizes model deployment and execution.
NVIDIA Account ID: The identifier of the customer billing entity with which cloud services at NVIDIA are associated.
Cloud Functions: A serverless API platform that executes models and containers when invoked and automatically manages the underlying GPU resources across multiple regions and clouds.
NGC Private Registry: Provides a secure space to store and share custom containers, models, Jupyter notebooks, and Helm charts within your enterprise.
BLS (Business Logic Scripting): A Triton feature that enables complex operational logic within model pipelines, allowing custom scripts to implement loops, conditionals, and other control flow during model execution.
BYOC (Bring Your Own Cluster): A deployment mode that targets an existing cluster maintained by the end user, rather than a cluster managed by NVCF.
Cluster: A collection of GPU-powered nodes/pods.
Cluster Group (Backend): A collection of Clusters.
Instance Type: A resource configuration (e.g., number of CPU cores) available for a given GPU type; each GPU type can support one or more Instance Types.
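To illustrate how a few of these terms relate in practice (an API Key authenticating the invocation of a Cloud Function), the sketch below assembles a hypothetical HTTPS request. The endpoint path, header names, payload shape, and the `build_invoke_request` helper are assumptions for illustration only, not the authoritative NVCF API:

```python
def build_invoke_request(function_id: str, api_key: str, payload: dict) -> dict:
    """Assemble the pieces of an HTTPS request that invokes a Cloud Function.

    Hypothetical sketch: the URL scheme and payload format are illustrative
    assumptions, not documented API surface.
    """
    return {
        # Function invocations are addressed by function ID (assumed path).
        "url": f"https://api.nvcf.nvidia.com/v2/nvcf/pexec/functions/{function_id}",
        "headers": {
            # The API Key (AK) is presented as a bearer token.
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        # Request body naming one or more model inputs (illustrative shape).
        "json": {"inputs": payload},
    }


# Example: build (but do not send) a request with placeholder values.
req = build_invoke_request("my-function-id", "nvapi-PLACEHOLDER", [{"name": "text"}])
```

The resulting dictionary could be passed to an HTTP client such as `requests.post(**req)`; building it separately keeps the authentication and addressing details in one place.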
© Copyright 2023-2024, NVIDIA. Last updated on Feb 16, 2024.