Step 3: Install Workflow Components & Run The Workflow

Audio Transcription

This section will walk through an end-to-end workflow deployment using the example software stack components previously described.

Ensure that the two nodes provisioned from the previous Hardware Requirements section are accessible.

  • One of the VMIs will be used for the training pipeline.

  • The second VMI, which is the Kubernetes cluster node, will be used for the inference pipeline.

  1. SSH into the training VMI (this is the VMI without Cloud Native Stack or Kubernetes installed).

  2. Download https://catalog.ngc.nvidia.com/enterprise/orgs/nvaie/resources/audio-transcription-training

    ngc registry resource download-version "nvaie/audio-transcription-training:0.1"


  3. Switch to the training directory.

    cd audio-transcription-training_v0.1/


  4. Make the setup script executable.

    chmod +x ./run.sh


  5. Run the setup script.

    sudo ./run.sh <YOUR-API-KEY>

    Note

    The installer may fail if dpkg did not complete cleanly during instance provisioning. If this occurs, run the following command to resolve the issue, then retry the installation.

    sudo dpkg --configure -a


  6. From a browser, navigate to the Jupyter Notebook URL displayed once the setup script completes. The URL is part of the CUSTOM STATES output, e.g.

    RUN: { "services": [ { "name": "notebook", "url":"http://<External-IP>/notebook" } ]}
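
    If you saved the script output to a file, the notebook URL can also be extracted programmatically. This is a convenience sketch, assuming the line is printed exactly as shown above, the output was captured to a hypothetical run.log, and jq is installed:

    sed -n 's/^RUN: //p' run.log | jq -r '.services[] | select(.name == "notebook") | .url'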


  7. Select and run through the Jupyter Notebooks, starting with the Welcome notebook.

    [Screenshot: at-image1.png]

  8. The training deployment steps are complete.

  9. Before running the deployment workflow, it is important to clean up the NGINX container that the training workflow created. Run the following command before going through the Inference workflow.

    docker rm -f $(docker ps -a -q)
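
    Note that this command force-removes every container on the training VMI, not just NGINX. If other containers must be preserved, a more targeted variant (a sketch, assuming the container's name contains "nginx") is:

    docker rm -f $(docker ps -aq --filter "name=nginx")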


As part of the workflow, we will demonstrate how to deploy the packaged workflow components as a Helm chart on the previously described Kubernetes-based platform. We will also demonstrate how to interact with the workflow, how each of the components in the pipeline works, and how they all function together.

This includes an example of how to securely send requests to the inference pipeline, using Envoy set up as a proxy to authenticate and authorize requests sent to Triton, and Keycloak as the OIDC identity provider. For more information about the authentication portion of the workflow, refer to the Authentication section in the Appendix.
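
At a high level, each request to the inference pipeline carries an OAuth2 bearer token that Envoy validates against Keycloak before forwarding the request to Triton. As a minimal sketch, an authenticated request against Triton's standard HTTP readiness endpoint could look like the following, where the hostname is a hypothetical Envoy ingress address and <ACCESS_TOKEN> is the token obtained in step 4 below:

    curl -k -H "Authorization: Bearer <ACCESS_TOKEN>" 'https://<ENVOY-ENDPOINT>/v2/health/ready'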

  1. First, configure Keycloak according to the instructions provided in the Appendix.

  2. Note down these six fields for the Deployment workflow.

    • Client ID

    • Client secret

    • Realm name

    • Username

    • Password

    • Token endpoint

  3. SSH into the inference/deployment VMI.

  4. Once Keycloak has been configured, run the following command on your system via the SSH console to get the access token (replace the TOKEN_ENDPOINT, CLIENT_ID, CLIENT_SECRET, USERNAME, and PASSWORD fields with the values previously created).

    curl -k -L -X POST '<TOKEN_ENDPOINT>' -H 'Content-Type: application/x-www-form-urlencoded' --data-urlencode 'client_id=<CLIENT_ID>' --data-urlencode 'grant_type=password' --data-urlencode 'client_secret=<CLIENT_SECRET>' --data-urlencode 'scope=openid' --data-urlencode 'username=<USERNAME>' --data-urlencode 'password=<PASSWORD>' | json_pp

    For example:

    curl -k -L -X POST 'https://auth.your-cluster.your-domain.com/realms/ai-workflows/protocol/openid-connect/token' -H 'Content-Type: application/x-www-form-urlencoded' --data-urlencode 'client_id=merlin-workflow-client' --data-urlencode 'grant_type=password' --data-urlencode 'client_secret=vihhgVP76TgA4qDL3c5jUFAN1gixWYT8' --data-urlencode 'scope=openid' --data-urlencode 'username=nvidia' --data-urlencode 'password=hello123' | json_pp

    This will output a JSON string like the one below:

    {"access_token":"eyJhbGc...","expires_in":54000,"refresh_expires_in":108000,"refresh_token":"eyJhbGci...","not-before-policy":0,"session_state":"e7e23016-2307-4290-af45-2c79ee79d0a1","scope":"openid email profile"}

    Note down the access_token; this field will be required later in the workflow, within the Jupyter Notebook.
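
    If you prefer to capture the token in a shell variable rather than copying it by hand, a convenience sketch (assuming jq is installed) is:

    ACCESS_TOKEN=$(curl -s -k -L -X POST '<TOKEN_ENDPOINT>' -H 'Content-Type: application/x-www-form-urlencoded' --data-urlencode 'client_id=<CLIENT_ID>' --data-urlencode 'grant_type=password' --data-urlencode 'client_secret=<CLIENT_SECRET>' --data-urlencode 'scope=openid' --data-urlencode 'username=<USERNAME>' --data-urlencode 'password=<PASSWORD>' | jq -r '.access_token')
    echo "$ACCESS_TOKEN"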

    Now we’re ready to deploy the audio transcription application.

  5. Ensure the NGC CLI is configured.

    ngc config set
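
    The command prompts for your NGC API key and org/team settings. If your NGC CLI version supports it, you can verify the active configuration afterwards (this subcommand is an assumption about your CLI version):

    ngc config current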


  6. Download https://catalog.ngc.nvidia.com/enterprise/orgs/nvaie/resources/audio-transcription-deployment

    ngc registry resource download-version "nvaie/audio-transcription-deployment:0.1"


  7. Switch to the transcription Helm chart directory.

    cd ~/audio-transcription-deployment_v0.1/helm_charts


  8. Install via Helm.

    helm -n riva install transcription transcription/ --set ngcCredentials.password=<NGC_KEY> --set workflow.keycloak.keycloakrealm=<WORKFLOW_REALM> --create-namespace
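
    Once the chart is installed, standard Helm commands can confirm that the release deployed successfully:

    helm list -n riva
    helm status transcription -n riva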


  9. Reference the output from the Helm install, and once the transcription pods are running, launch the Jupyter Notebook from a browser.

  10. Validate that all of the transcription pods are running.

    kubectl get pods -n riva

    [Screenshot: at-image2.png]
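
    Optionally, instead of polling with kubectl get, you can block until every pod in the namespace reports Ready (standard kubectl; the 600-second timeout is an arbitrary choice):

    kubectl wait --for=condition=Ready pods --all -n riva --timeout=600s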

  11. Run through the provided Jupyter Notebooks to complete the Inference Pipeline.
