Audio Transcription
This section will walk through an end-to-end workflow deployment using the example software stack components previously described.
Ensure that the previous Cloud Native Software Requirements section has been completed prior to proceeding with the deployment steps.
SSH into the training VMI (this is the VMI without Cloud Native Stack or Kubernetes installed).
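For example (the key path, user name, and address below are hypothetical placeholders; use the credentials for your environment):
ssh -i ~/.ssh/your-key.pem <user>@<Training-VMI-IP>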
Download the training resource from https://catalog.ngc.nvidia.com/enterprise/orgs/nvaie/resources/audio-transcription-training
ngc registry resource download-version "nvaie/audio-transcription-training:0.1"
Switch to the training directory.
cd audio-transcription-training_v0.1/
Make the set-up script executable.
chmod +x ./run.sh
Run the setup script.
sudo ./run.sh <YOUR-API-KEY>
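For example, assuming your NGC API key has been exported into a shell variable (the variable name here is only an illustration):
# placeholder; paste your actual NGC API key
export NGC_API_KEY='<YOUR-API-KEY>'
sudo ./run.sh "$NGC_API_KEY"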
Note: The installer may fail if dpkg does not run cleanly or to completion during instance provisioning. If this occurs, run the following command to resolve the issue, then retry the installation.
sudo dpkg --configure -a
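If that alone does not resolve the issue, repairing broken package dependencies before retrying may also help:
sudo apt-get install -f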
From a browser, navigate to the Jupyter Notebook URL displayed once the setup script completes. It is part of the CUSTOM STATES output, e.g.
RUN: { "services": [ { "name": "notebook", "url": "http://<External-IP>/notebook" } ] }
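If jq is installed, the notebook URL can be pulled out of that JSON programmatically; a minimal sketch, assuming the JSON portion of the RUN line (everything after RUN:) has been copied into a shell variable:
# hypothetical variable holding the JSON from the CUSTOM STATES output
RUN_JSON='{ "services": [ { "name": "notebook", "url": "http://<External-IP>/notebook" } ] }'
# print the URL of the service named "notebook"
echo "$RUN_JSON" | jq -r '.services[] | select(.name == "notebook") | .url'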
Select and run through the Jupyter notebooks, starting with the Welcome notebook.
The training deployment steps are complete.
Before running the deployment workflow, it is important to clean up the nginx container that the training workflow created. Run the following command before going through the inference workflow.
docker rm -f $(docker ps -a -q)
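Note that this removes every container on the host, not just nginx. If you would rather remove only the nginx container, a more targeted variant (assuming the container's name contains nginx) would be:
docker ps -aq --filter "name=nginx" | xargs -r docker rm -f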
First, configure Keycloak according to the instructions provided in the Appendix.
Note down these six fields for the deployment workflow (a sketch for keeping them handy follows this list):
Client ID
Client secret
Realm name
Username
Password
Token endpoint
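One convenient way to keep these values at hand for the commands that follow is to export them as shell variables. A sketch, using the example values that appear later in this section; substitute your own:
export TOKEN_ENDPOINT='https://auth.your-domain.com/realms/ai-workflows/protocol/openid-connect/token'
export CLIENT_ID='merlin-workflow-client'
export CLIENT_SECRET='vihhgVP76TgA4qDL3c5jUFAN1gixWYT8'
export KEYCLOAK_USER='nvidia'
export KEYCLOAK_PASSWORD='hello123'
export WORKFLOW_REALM='ai-workflows'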
SSH into the inference/deployment VMI.
Once Keycloak has been configured, run the following command on your system via the SSH console to get the access token (replace the TOKEN_ENDPOINT, CLIENT_ID, CLIENT_SECRET, USERNAME, and PASSWORD fields with the values previously created).
curl -k -L -X POST 'TOKEN_ENDPOINT' -H 'Content-Type: application/x-www-form-urlencoded' --data-urlencode 'client_id=CLIENT_ID' --data-urlencode 'grant_type=password' --data-urlencode 'client_secret=CLIENT_SECRET' --data-urlencode 'scope=openid' --data-urlencode 'username=USERNAME' --data-urlencode 'password=PASSWORD'
For example:
curl -k -L -X POST 'https://auth.your-domain.com/realms/ai-workflows/protocol/openid-connect/token' -H 'Content-Type: application/x-www-form-urlencoded' --data-urlencode 'client_id=merlin-workflow-client' --data-urlencode 'grant_type=password' --data-urlencode 'client_secret=vihhgVP76TgA4qDL3c5jUFAN1gixWYT8' --data-urlencode 'scope=openid' --data-urlencode 'username=nvidia' --data-urlencode 'password=hello123'
This will output a JSON string like the one below:
{"access_token":"eyJhbGc...","expires_in":54000,"refresh_expires_in":108000,"refresh_token":"eyJhbGci...","not-before-policy":0,"session_state":"e7e23016-2307-4290-af45-2c79ee79d0a1","scope":"openid email profile"}
Note down the access_token field; it will be required later in the workflow, within the Jupyter notebook.
Now we're ready to deploy the audio transcription application.
Ensure the NGC CLI is configured.
ngc config set
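To confirm the CLI picked up your settings, recent versions of the NGC CLI can print the active configuration:
ngc config current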
Download the deployment resource from https://catalog.ngc.nvidia.com/enterprise/orgs/nvaie/resources/audio-transcription-deployment
ngc registry resource download-version "nvaie/audio-transcription-deployment:0.1"
Switch to the transcription Helm chart directory.
cd ~/audio-transcription-deployment_v0.1/helm_charts
Install via Helm.
helm -n riva install transcription transcription/ --set ngcCredentials.password=<NGC_KEY> --set workflow.keycloak.keycloakrealm=<WORKFLOW_REALM> --create-namespace
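For example, reusing the placeholder variables sketched earlier (your NGC API key and the Keycloak realm created in the Appendix):
helm -n riva install transcription transcription/ \
  --set ngcCredentials.password="$NGC_API_KEY" \
  --set workflow.keycloak.keycloakrealm="$WORKFLOW_REALM" \
  --create-namespace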
Reference the output from the Helm install and launch the Jupyter Notebook from a browser once the transcription pods are running.
Validate that all of the transcription pods are running.
kubectl get pods -n riva
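Rather than polling manually, you can also block until every pod in the namespace reports ready (the timeout here is an arbitrary example):
kubectl wait --for=condition=Ready pods --all -n riva --timeout=600s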
Run through the provided Jupyter Notebooks to complete the Inference Pipeline.