Container-Based Function Creation
Container-based functions require building and pushing a Cloud Functions compatible Docker container image to your container registry.
Resources
- Example containers can be found in the examples repository.
- The repository also contains helper functions that are useful when authoring your container, including:
  - Helpers that parse Cloud Functions-specific parameters on invocation
  - Helpers that instrument your container with Cloud Functions-compatible logs
It’s always a best practice to emit logs from your inference container. Cloud Functions supports third-party logging and metrics emission from your container.
Please note that container functions should not run as the root user; running as root is not formally supported on any Cloud Functions backend.
Container Endpoints
Any server can be implemented within the container, as long as it implements the following:
- For HTTP-based functions, a health check endpoint that returns a 200 HTTP Status Code on success.
- For gRPC-based functions, a standard gRPC health check; see the gRPC Health Checking documentation for more information.
- An inference endpoint (this endpoint will be called during function invocation)
These endpoints are expected to be served on the same port, defined as the inferencePort.
Cloud Functions reserves the following ports on your container for internal monitoring and metrics:
- Port 8080
- Port 8010
Cloud Functions also expects the following directories in the container to remain read-only for caching purposes:
- /config/ directory
- Nested directories created inside /config/
Composing a FastAPI Container
It’s possible to use any container with Cloud Functions as long as it implements a server with the above endpoints. Below is an example of a FastAPI-based container compatible with Cloud Functions. Clone the FastAPI echo example.
Create the “requirements.txt” File
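The example's dependency file was not included in this extract; a minimal requirements.txt for a FastAPI server might look like the following sketch (the exact package set depends on your model):

```
fastapi
uvicorn[standard]
```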
Implement the Server
Note that in the example above, the function’s configuration during creation will be:
- Inference Protocol: HTTP
- Inference Endpoint: /echo
- Health Endpoint: /health
- Inference Port (also used for health check): 8000
Create the Dockerfile
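The Dockerfile itself was not included in this extract; a minimal sketch for the FastAPI server above might look like the following, assuming the server file is named main.py (the base image tag and user name are illustrative):

```dockerfile
FROM python:3.11-slim

WORKDIR /app

# Install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy in the server source
COPY main.py .

# Run as a non-root user — running as root is not supported
RUN useradd -m appuser
USER appuser

# Serve the health and inference endpoints on the inference port (8000)
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```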
Build the Container & Create the Function
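The build step can be sketched as follows; the image name is an illustrative placeholder:

```shell
# Build the image from the directory containing the Dockerfile;
# "fastapi-echo" is an illustrative image name
docker build -t fastapi-echo .
```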
See the Create the Function section below for the remaining steps.
Composing a PyTriton Container
NVIDIA’s PyTriton is a Python-native way to run the Triton Inference Server. A minimum version of 0.3.0 is required.
Create the “requirements.txt” File
- This file should list the Python dependencies required for your model.
- Add nvidia-pytriton to your requirements.txt file.
Here is an example of a requirements.txt file:
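The example file was not included in this extract; a sketch might look like the following (the version pin reflects the 0.3.0 minimum stated above; any additional model dependencies are illustrative):

```
nvidia-pytriton>=0.3.0
numpy
```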
Create the “run.py” File
- Your run.py file (or similar Python file) needs to define a PyTriton model.
- This involves importing your model dependencies and creating a PyTritonServer class with an __init__ function, an _infer_fn function, and a run function that serves the inference function, defining the model name, the inputs, and the outputs along with optional configuration.
Here is an example of a run.py file:
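The example file was not included in this extract; below is a sketch following the structure described above, using the public PyTriton API. The model name ("my_model"), tensor names, and the identity pass-through inference are illustrative assumptions — replace them with your own model.

```python
# run.py — sketch of a PyTriton server; model name, tensors, and the
# identity inference function are illustrative placeholders.
import numpy as np
from pytriton.decorators import batch
from pytriton.model_config import ModelConfig, Tensor
from pytriton.triton import Triton, TritonConfig


class PyTritonServer:
    def __init__(self):
        # Import your model dependencies and load weights here.
        pass

    @batch
    def _infer_fn(self, **inputs):
        # Replace this identity pass-through with your model's inference.
        return {"output": inputs["input"]}

    def run(self):
        # Serve the model on the inference port (8000 in this example);
        # Triton also serves its standard health endpoints on this port.
        with Triton(config=TritonConfig(http_port=8000)) as triton:
            triton.bind(
                model_name="my_model",
                infer_func=self._infer_fn,
                inputs=[Tensor(name="input", dtype=np.float32, shape=(-1,))],
                outputs=[Tensor(name="output", dtype=np.float32, shape=(-1,))],
                config=ModelConfig(max_batch_size=8),
            )
            triton.serve()


if __name__ == "__main__":
    PyTritonServer().run()
```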
Create the “Dockerfile”
- Create a file named Dockerfile in your model directory.
- It’s strongly recommended to use NVIDIA-optimized containers such as CUDA, PyTorch, or TensorRT as your base container. They can be downloaded from the NGC Catalog.
- Make sure to install your Python requirements in your Dockerfile.
- Copy in your model source code and model weights.
Here is an example of a Dockerfile:
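The example file was not included in this extract; a sketch following the bullets above might look like the following. The base image tag, file names, and user name are illustrative assumptions:

```dockerfile
# NVIDIA-optimized base container from the NGC Catalog (tag is illustrative)
FROM nvcr.io/nvidia/pytorch:23.10-py3

WORKDIR /app

# Install Python requirements (including nvidia-pytriton)
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy in model source code and model weights
COPY run.py .
COPY model_weights/ ./model_weights/

# Run as a non-root user — running as root is not supported
RUN useradd -m appuser
USER appuser

CMD ["python3", "run.py"]
```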
Build the Docker Image
- Open a terminal or command prompt.
- Navigate to the my_model directory.
- Run the following command to build the Docker image:
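```shell
# Build the image from the current directory's Dockerfile
docker build -t my_model_image .
```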
Replace my_model_image with the desired name for your docker image.
Push the Docker Image
Tag and push the docker image to your container registry.
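A sketch of the tag-and-push step is shown below; the registry path and tag are illustrative placeholders — substitute your own registry, namespace, and version:

```shell
# Tag the local image for your registry (path is a placeholder)
docker tag my_model_image nvcr.io/your-org/my_model_image:1.0

# Push it to the registry
docker push nvcr.io/your-org/my_model_image:1.0
```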
Create the Function
Create the function via the NVCF API. In this example, we define the inference port as 8000 and use the default inference and health endpoint paths.
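As a sketch, the creation request might look like the following; the API URL, the exact field names, and the endpoint path shown are assumptions here — verify them against the NVCF API reference before use:

```shell
# Sketch of a function-creation request (URL and field names are assumptions)
curl -X POST "https://api.nvcf.nvidia.com/v2/nvcf/functions" \
  -H "Authorization: Bearer $NVCF_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "my-model-function",
        "inferenceUrl": "/v2/models/my_model/infer",
        "inferencePort": 8000,
        "containerImage": "nvcr.io/your-org/my_model_image:1.0"
      }'
```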
Additional Examples
See more examples of containers that are Cloud Functions compatible in the function samples directory.
Creating gRPC-based Functions
Cloud Functions supports function invocation via gRPC. During function creation, specify that the function is a gRPC function by setting the inferenceUrl field to /grpc.
Prerequisites
- The function container must implement a gRPC port, endpoint, and health check. The health check is expected to be served on the gRPC inference port; there is no need to define a separate health endpoint path.
- See gRPC health checking.
- See an example container with a gRPC server that is Cloud Functions compatible.
gRPC Function Creation via API
When creating the gRPC function, set the inferenceUrl field to /grpc:
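As a sketch (only the inferenceUrl value of /grpc is specified by this document; the API URL and the other field names are assumptions to verify against the NVCF API reference):

```shell
# Sketch of a gRPC function-creation request; note inferenceUrl is /grpc
curl -X POST "https://api.nvcf.nvidia.com/v2/nvcf/functions" \
  -H "Authorization: Bearer $NVCF_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "my-grpc-function",
        "inferenceUrl": "/grpc",
        "inferencePort": 8001,
        "containerImage": "nvcr.io/your-org/my_grpc_image:1.0"
      }'
```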
gRPC Function Invocation
gRPC function invocation uses the same Authorization: Bearer $NVCF_TOKEN header as HTTP invocation, passed as gRPC metadata. See the gRPC invocation examples for details on how to authenticate and invoke your gRPC function.
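Passing the bearer token as gRPC metadata can be sketched as follows in Python; the endpoint hostname and the commented-out stub names are assumptions/placeholders for your own generated client:

```python
# Sketch of authenticating a gRPC invocation with the NVCF bearer token.
# The hostname and the stub/method names are placeholder assumptions.
import os

import grpc

# The same bearer token used for HTTP invocation, passed as gRPC metadata.
token = os.environ.get("NVCF_TOKEN", "<paste-token-here>")
metadata = [("authorization", f"Bearer {token}")]

# Channel creation is lazy; no connection is made until the first RPC.
channel = grpc.secure_channel(
    "grpc.nvcf.nvidia.com:443", grpc.ssl_channel_credentials()
)

# With a stub generated from your proto (hypothetical names):
# stub = my_service_pb2_grpc.MyServiceStub(channel)
# response = stub.Infer(request, metadata=metadata)
```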