Route Optimization AI Workflow (Latest Version)

Step 3: Install Workflow Components

Route Optimization

Ensure that the previous section, Step 2: Set Up Required Infrastructure, has been completed before proceeding with the deployment steps below.

As part of the workflow, we will demonstrate how to deploy the packaged workflow components as a Helm chart on the Kubernetes-based platform described previously.

We will then demonstrate an example of what the data and cost matrix look like, and how a client might interact with the cuOpt microservice, using Envoy as a proxy to authenticate and authorize requests sent to cuOpt and Keycloak as the OIDC identity provider. For more information about the authentication portion of the workflow, refer to the Authentication section in the Appendix.
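
As an illustration of this request pattern (a hedged sketch only; the <CUOPT_ENDPOINT> URL, the payload file name, and the exact cuOpt API path are placeholders for deployment-specific values, not values defined by the workflow), a client call routed through Envoy carries the Keycloak-issued access token as a bearer token:

# Hypothetical example: replace <CUOPT_ENDPOINT>, <ACCESS_TOKEN>, and the payload file with values from your deployment.
# Envoy validates the bearer token against Keycloak before forwarding the request to the cuOpt microservice.
curl -k -X POST 'https://<CUOPT_ENDPOINT>' -H 'Authorization: Bearer <ACCESS_TOKEN>' -H 'Content-Type: application/json' -d @problem_data.json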

Prior to deploying the workflow components, ensure that you have set up Keycloak as required for the workflow, following the instructions in the Appendix.

Note

Make sure to note down the six fields specified in the Keycloak Configuration section; you will use these values in the next step.

Once Keycloak has been configured, run the following command on your system via the SSH console to get the access token (replace the TOKEN_ENDPOINT, CLIENT_ID, CLIENT_SECRET, USERNAME and PASSWORD fields with the values previously created).

curl -k -L -X POST '<TOKEN_ENDPOINT>' -H 'Content-Type: application/x-www-form-urlencoded' --data-urlencode 'client_id=<CLIENT_ID>' --data-urlencode 'grant_type=password' --data-urlencode 'client_secret=<CLIENT_SECRET>' --data-urlencode 'scope=openid' --data-urlencode 'username=<USERNAME>' --data-urlencode 'password=<PASSWORD>' | json_pp

For example:

curl -k -L -X POST 'https://auth.your-cluster.your-domain.com/realms/ai-workflows/protocol/openid-connect/token' -H 'Content-Type: application/x-www-form-urlencoded' --data-urlencode 'client_id=merlin-workflow-client' --data-urlencode 'grant_type=password' --data-urlencode 'client_secret=vihhgVP76TgA4qDL3c5jUFAN1gixWYT8' --data-urlencode 'scope=openid' --data-urlencode 'username=nvidia' --data-urlencode 'password=hello123' | json_pp

This will output a JSON string similar to the following:

{"access_token":"eyJhbGc...","expires_in":54000,"refresh_expires_in":108000,"refresh_token":"eyJhbGci...","not-before-policy":0,"session_state":"e7e23016-2307-4290-af45-2c79ee79d0a1","scope":"openid email profile"}

Note down the access_token; this field will be required later in the workflow, within the Jupyter notebook.
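
Optionally, if jq is installed on your system (an assumption; the workflow itself does not require it), you can capture the token directly into an environment variable instead of copying it by hand:

# Assumes jq is available; the placeholders are the same as in the curl command above.
export ACCESS_TOKEN=$(curl -sk -L -X POST '<TOKEN_ENDPOINT>' -H 'Content-Type: application/x-www-form-urlencoded' --data-urlencode 'client_id=<CLIENT_ID>' --data-urlencode 'grant_type=password' --data-urlencode 'client_secret=<CLIENT_SECRET>' --data-urlencode 'scope=openid' --data-urlencode 'username=<USERNAME>' --data-urlencode 'password=<PASSWORD>' | jq -r .access_token)
echo "$ACCESS_TOKEN"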

Create an environment variable for the namespace used to logically separate the workflow-related deployments from others on the Kubernetes cluster deployed via the Cloud Native Stack, using the following command:

export NAMESPACE="cuopt"

Create an environment variable for the NGC API key created previously, using the following command:

export API_KEY="<your NGC API key>"

Create an environment variable for the Keycloak realm created during the Keycloak configuration, using the following command:

export KEYCLOAK_REALM="<keycloak realm>"
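
As an optional sanity check (not part of the original instructions), confirm that the three variables are set before continuing:

# Each line should print a non-empty value; the API key length is shown instead of the key itself.
echo "NAMESPACE=$NAMESPACE"
echo "KEYCLOAK_REALM=$KEYCLOAK_REALM"
echo "API_KEY length: ${#API_KEY}"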

The route optimization workflow is packaged as a Helm chart, with a series of subcharts to deploy the various components required for the workflow.

To deploy the workflow, first fetch the Helm chart from NGC using the following command:

helm fetch https://helm.ngc.nvidia.com/j9mrpofbmtxd/cuopt_workflows/charts/route-optimization-0.1.0.tgz --username='$oauthtoken' --password=$API_KEY
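
The chart archive is saved to the current working directory. As an optional check before installing, you can list the file and inspect its metadata:

# Optional verification that the chart downloaded correctly.
ls -l route-optimization-0.1.0.tgz
helm show chart route-optimization-0.1.0.tgz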

Then run the following command on the system from the directory into which the chart was downloaded.

helm install route-opt --set ngcKey="$API_KEY" route-optimization-0.1.0.tgz --namespace $NAMESPACE --create-namespace --timeout 3600s --set cuopt.workflow.keycloak.keycloakrealm="$KEYCLOAK_REALM"
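
Once the install command returns, you can optionally confirm that the release deployed by querying Helm for its status (the release should report STATUS: deployed):

# Optional: inspect the Helm release created above.
helm status route-opt -n $NAMESPACE
helm list -n $NAMESPACE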

To ensure that all pods are running, run the following command:

kubectl get pods -n $NAMESPACE

After validating that all pods are running, proceed to the next step.
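
If some pods are still starting, you can optionally wait for them to become ready rather than polling manually (the 600-second timeout below is an arbitrary example value):

# Block until every pod in the namespace is Ready, or fail after 10 minutes.
kubectl wait --for=condition=Ready pods --all -n $NAMESPACE --timeout=600s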
