Kubernetes Deployment#

To start with a simple Kubernetes deployment of an Audio2Face-3D cluster, you can use our quick deployment script.

It assumes the following dependencies:
  • microk8s

  • microk8s GPU addon running the NVIDIA GPU operator.

  • A valid NGC token to connect to NVIDIA’s image repository as well as a configured NGC CLI.

Make sure you have NVAIE access and your Personal Key.

Note

If this is the first time you are deploying A2F and your host machine already has a Kubernetes setup, remove the ~/.kube/ folder. You can back this folder up first if required:

$ mv ~/.kube ~/.kube.bk

1. Install microk8s and the GPU addon, set the local path provisioner, and set secrets#

Run the following command to install microk8s with the GPU addon via snap.

$ if [ -d $HOME/.kube ]; then
          echo "kubernetes already setup; skipping microk8s installation; please rm -rf $HOME/.kube to force installation"
  else
          sudo snap install microk8s --revision=5891 --classic
          sudo microk8s enable gpu --version v23.6.1
          sudo snap install --classic kubectl
          sudo snap install helm --classic
          sudo usermod -a -G microk8s ${USER}
          mkdir -p $HOME/.kube
          sudo microk8s config > $HOME/.kube/config
          sudo chown $(id -u):$(id -g) $HOME/.kube/config
          sudo microk8s stop
          sudo microk8s refresh-certs --cert ca.crt
          sudo microk8s start
          sudo microk8s status --wait-ready
  fi

Then, to avoid using sudo, run:

$ newgrp microk8s

Set the local path provisioner by running:

$ curl https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.23/deploy/local-path-storage.yaml | sed 's/^  name: local-path$/  name: mdx-local-path/g' | microk8s kubectl apply -f -
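
The sed filter in this pipeline renames the provisioner's storage class from local-path to mdx-local-path before applying the manifest. Its effect can be sketched on a single sample manifest line (the real pipeline feeds it the full YAML downloaded from GitHub):

```shell
# Demonstrate the substitution the pipeline applies to the downloaded manifest.
printf '  name: local-path\n' | sed 's/^  name: local-path$/  name: mdx-local-path/g'
# prints:   name: mdx-local-path
```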

Export the NGC API key as an environment variable:

$ export NGC_API_KEY=<value>

Delete any pre-existing secrets and set the required secrets with your valid NGC_API_KEY:

$ microk8s kubectl delete secret --ignore-not-found ngc-docker-reg-secret
$ microk8s kubectl delete secret --ignore-not-found ngc-api-key-secret
$ microk8s kubectl create secret docker-registry ngc-docker-reg-secret --docker-username='$oauthtoken' --docker-password=$NGC_API_KEY --docker-server=nvcr.io
$ microk8s kubectl create secret generic ngc-api-key-secret --from-literal=NGC_API_KEY=$NGC_API_KEY
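
A common pitfall with the secret commands above: the docker-registry username must be the literal string $oauthtoken, not the expansion of a shell variable. The single quotes in --docker-username='$oauthtoken' are what prevent expansion, as this quick quoting check shows:

```shell
# Single quotes keep $oauthtoken literal; double quotes would expand it (to empty).
printf '%s\n' '$oauthtoken'
# prints: $oauthtoken
```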

2. Download and install the Helm chart#

Use the following command to create a new directory and download the Helm chart into it:

$ export HELM_VERSION=1.2.0
$ mkdir a2f-3d-nim/ && cd a2f-3d-nim/
$ microk8s helm fetch https://helm.ngc.nvidia.com/nim/nvidia/charts/audio2face-3d-"$HELM_VERSION".tgz --username='$oauthtoken' --password=$NGC_API_KEY
$ tar xvf audio2face-3d-$HELM_VERSION.tgz

The Helm chart includes observability features for the A2F-3D service. For additional details about observability, refer to this page: Observability.

More information about updating the configuration

The config map can be found in the audio2face-3d/charts/a2f/values.yaml file, starting at line 140. It is a concatenation of the same three YAML files used in the A2F-3D NIM Manual Container Deployment and Configuration, and appears as follows:

configs:
  advanced_config.yaml
    ...
  deployment_config.yaml
    ...
  stylization_config.yaml
    ...

You can update the values directly in that file. For different models, use the content of the following three stylization files:

claire_stylization_config.yaml
# These are the default emotions applied at the beginning of any audio clip; they also define the default preferred emotion.
# Their values range from 0.0 to 1.0
default_beginning_emotions:
  amazement: 0.0
  anger: 0.0
  cheekiness: 0.0
  disgust: 0.0
  fear: 0.0
  grief: 0.0
  joy: 0.0
  outofbreath: 0.0
  pain: 0.0
  sadness: 0.0

a2e:
  enabled: true # Enable audio2emotion, ai-generated audio-driven emotion
  live_transition_time: 0.5 # Controls the smoothness of the output transition toward the target value across frames; higher values result in smoother transitions. Each frame updates at a rate of <frame time length> / <live transition time> (capped at 1.0) toward the raw result.
  post_processing_params:
    emotion_contrast: 1.0 # Increases the spread between emotion values by pushing them higher or lower
    emotion_strength: 0.6 # Sets the strength of generated emotions relative to neutral emotion
    enable_preferred_emotion: true # Activates blending of the preferred emotion with the auto-generated emotions
    live_blend_coef: 0.7 # Coefficient for exponential smoothing of emotion
    max_emotions: 3 # Sets a firm limit on the quantity of emotion sliders engaged by A2E - emotions with the highest weight will be prioritized
    preferred_emotion_strength: 0.5 # Sets the strength of the preferred emotion (if one is loaded) relative to generated emotions

a2f:
  # A2F model, can be one of james_v2.3, claire_v2.3 or mark_v2.3
  inference_model_id: claire_v2.3
  blendshape_id: claire_topo1_v2.1

  face_params:
    eyelid_offset: 0.0 # Adjusts the default pose of eyelid open-close
    face_mask_level: 0.6 # Determines the boundary between the upper and lower regions of the face
    face_mask_softness: 0.0085 # Determines how smoothly the upper and lower face regions blend on the boundary
    input_strength: 1.0 # Controls the magnitude of the input audio
    lip_close_offset: 0.0 # Adjusts the default pose of lip close-open
    lower_face_smoothing: 0.006 # Applies temporal smoothing to the lower face motion
    lower_face_strength: 1.25 # Controls the range of motion on the lower regions of the face
    skin_strength: 1.0 # Controls the range of motion of the skin
    upper_face_smoothing: 0.001 # Applies temporal smoothing to the upper face motion
    upper_face_strength: 1.0 # Controls the range of motion on the upper regions of the face

  blendshape_params: # Modulates the effect of each blendshape: gain * w + offset
    enable_clamping_bs_weight: false

    # Multiplier for each blendshape output. This list depends on the blendshape model.
    weight_multipliers:
      EyeBlinkLeft: 1.0
      EyeLookDownLeft: 1.0
      EyeLookInLeft: 1.0
      EyeLookOutLeft: 1.0
      EyeLookUpLeft: 1.0
      EyeSquintLeft: 1.0
      EyeWideLeft: 1.0
      EyeBlinkRight: 1.0
      EyeLookDownRight: 1.0
      EyeLookInRight: 1.0
      EyeLookOutRight: 1.0
      EyeLookUpRight: 1.0
      EyeSquintRight: 1.0
      EyeWideRight: 1.0
      JawForward: 1.0
      JawLeft: 1.0
      JawRight: 1.0
      JawOpen: 1.0
      MouthClose: 1.0
      MouthFunnel: 1.0
      MouthPucker: 1.0
      MouthLeft: 1.0
      MouthRight: 1.0
      MouthSmileLeft: 1.0
      MouthSmileRight: 1.0
      MouthFrownLeft: 1.0
      MouthFrownRight: 1.0
      MouthDimpleLeft: 1.0
      MouthDimpleRight: 1.0
      MouthStretchLeft: 1.0
      MouthStretchRight: 1.0
      MouthRollLower: 1.0
      MouthRollUpper: 1.0
      MouthShrugLower: 1.0
      MouthShrugUpper: 1.0
      MouthPressLeft: 1.0
      MouthPressRight: 1.0
      MouthLowerDownLeft: 1.0
      MouthLowerDownRight: 1.0
      MouthUpperUpLeft: 1.0
      MouthUpperUpRight: 1.0
      BrowDownLeft: 1.0
      BrowDownRight: 1.0
      BrowInnerUp: 1.0
      BrowOuterUpLeft: 1.0
      BrowOuterUpRight: 1.0
      CheekPuff: 1.0
      CheekSquintLeft: 1.0
      CheekSquintRight: 1.0
      NoseSneerLeft: 1.0
      NoseSneerRight: 1.0
      TongueOut: 1.0

    # Constant offset for each blendshape output. This list depends on the blendshape model.
    weight_offsets:
      EyeBlinkLeft: 0.0
      EyeLookDownLeft: 0.0
      EyeLookInLeft: 0.0
      EyeLookOutLeft: 0.0
      EyeLookUpLeft: 0.0
      EyeSquintLeft: 0.0
      EyeWideLeft: 0.0
      EyeBlinkRight: 0.0
      EyeLookDownRight: 0.0
      EyeLookInRight: 0.0
      EyeLookOutRight: 0.0
      EyeLookUpRight: 0.0
      EyeSquintRight: 0.0
      EyeWideRight: 0.0
      JawForward: 0.0
      JawLeft: 0.0
      JawRight: 0.0
      JawOpen: 0.0
      MouthClose: 0.0
      MouthFunnel: 0.0
      MouthPucker: 0.0
      MouthLeft: 0.0
      MouthRight: 0.0
      MouthSmileLeft: 0.0
      MouthSmileRight: 0.0
      MouthFrownLeft: 0.0
      MouthFrownRight: 0.0
      MouthDimpleLeft: 0.0
      MouthDimpleRight: 0.0
      MouthStretchLeft: 0.0
      MouthStretchRight: 0.0
      MouthRollLower: 0.0
      MouthRollUpper: 0.0
      MouthShrugLower: 0.0
      MouthShrugUpper: 0.0
      MouthPressLeft: 0.0
      MouthPressRight: 0.0
      MouthLowerDownLeft: 0.0
      MouthLowerDownRight: 0.0
      MouthUpperUpLeft: 0.0
      MouthUpperUpRight: 0.0
      BrowDownLeft: 0.0
      BrowDownRight: 0.0
      BrowInnerUp: 0.0
      BrowOuterUpLeft: 0.0
      BrowOuterUpRight: 0.0
      CheekPuff: 0.0
      CheekSquintLeft: 0.0
      CheekSquintRight: 0.0
      NoseSneerLeft: 0.0
      NoseSneerRight: 0.0
      TongueOut: 0.0

james_stylization_config.yaml
# These are the default emotions applied at the beginning of any audio clip.
# Their values range from 0.0 to 1.0
default_beginning_emotions:
  amazement: 0.0
  anger: 0.0
  cheekiness: 0.0
  disgust: 0.0
  fear: 0.0
  grief: 0.0
  joy: 0.0
  outofbreath: 0.0
  pain: 0.0
  sadness: 0.0

a2e:
  enabled: true
  live_transition_time: 0.5
  post_processing_params:
    emotion_contrast: 1.0 # Increases the spread between emotion values by pushing them higher or lower
    emotion_strength: 0.6 # Sets the strength of generated emotions relative to neutral emotion
    enable_preferred_emotion: true # Activates blending of the preferred emotion with the auto-generated emotions
    live_blend_coef: 0.7 # Coefficient for exponential smoothing of emotion
    max_emotions: 3 # Sets a firm limit on the quantity of emotion sliders engaged by A2E - emotions with the highest weight will be prioritized
    preferred_emotion_strength: 0.5 # Sets the strength of the preferred emotion (if one is loaded) relative to generated emotions

a2f:
  # A2F model, can be one of james_v2.3, claire_v2.3 or mark_v2.3
  inference_model_id: james_v2.3
  blendshape_id: james_topo2_v2.2

  face_params:
    eyelid_offset: 0.06 # Adjusts the default pose of eyelid open-close
    face_mask_level: 0.6 # Determines the boundary between the upper and lower regions of the face
    face_mask_softness: 0.0085 # Determines how smoothly the upper and lower face regions blend on the boundary
    input_strength: 1.0 # Controls the magnitude of the input audio
    lip_close_offset: -0.02 # Adjusts the default pose of lip close-open
    lower_face_smoothing: 0.006 # Applies temporal smoothing to the lower face motion
    lower_face_strength: 1.2 # Controls the range of motion on the lower regions of the face
    skin_strength: 1.0 # Controls the range of motion of the skin
    upper_face_smoothing: 0.001 # Applies temporal smoothing to the upper face motion
    upper_face_strength: 1.0 # Controls the range of motion on the upper regions of the face

  blendshape_params: # Modulates the effect of each blendshape: gain * w + offset
    enable_clamping_bs_weight: false

    weight_multipliers:
      EyeBlinkLeft: 1.0
      EyeLookDownLeft: 1.0
      EyeLookInLeft: 1.0
      EyeLookOutLeft: 1.0
      EyeLookUpLeft: 1.0
      EyeSquintLeft: 1.0
      EyeWideLeft: 1.0
      EyeBlinkRight: 1.0
      EyeLookDownRight: 1.0
      EyeLookInRight: 1.0
      EyeLookOutRight: 1.0
      EyeLookUpRight: 1.0
      EyeSquintRight: 1.0
      EyeWideRight: 1.0
      JawForward: 1.0
      JawLeft: 1.0
      JawRight: 1.0
      JawOpen: 1.0
      MouthClose: 1.0
      MouthFunnel: 1.0
      MouthPucker: 1.0
      MouthLeft: 1.0
      MouthRight: 1.0
      MouthSmileLeft: 1.0
      MouthSmileRight: 1.0
      MouthFrownLeft: 1.0
      MouthFrownRight: 1.0
      MouthDimpleLeft: 1.0
      MouthDimpleRight: 1.0
      MouthStretchLeft: 1.0
      MouthStretchRight: 1.0
      MouthRollLower: 1.0
      MouthRollUpper: 1.0
      MouthShrugLower: 1.0
      MouthShrugUpper: 1.0
      MouthPressLeft: 1.0
      MouthPressRight: 1.0
      MouthLowerDownLeft: 1.0
      MouthLowerDownRight: 1.0
      MouthUpperUpLeft: 1.0
      MouthUpperUpRight: 1.0
      BrowDownLeft: 1.0
      BrowDownRight: 1.0
      BrowInnerUp: 1.0
      BrowOuterUpLeft: 1.0
      BrowOuterUpRight: 1.0
      CheekPuff: 1.0
      CheekSquintLeft: 1.0
      CheekSquintRight: 1.0
      NoseSneerLeft: 1.0
      NoseSneerRight: 1.0
      TongueOut: 1.0

    weight_offsets:
      EyeBlinkLeft: 0.0
      EyeLookDownLeft: 0.0
      EyeLookInLeft: 0.0
      EyeLookOutLeft: 0.0
      EyeLookUpLeft: 0.0
      EyeSquintLeft: 0.0
      EyeWideLeft: 0.0
      EyeBlinkRight: 0.0
      EyeLookDownRight: 0.0
      EyeLookInRight: 0.0
      EyeLookOutRight: 0.0
      EyeLookUpRight: 0.0
      EyeSquintRight: 0.0
      EyeWideRight: 0.0
      JawForward: 0.0
      JawLeft: 0.0
      JawRight: 0.0
      JawOpen: 0.0
      MouthClose: 0.0
      MouthFunnel: 0.0
      MouthPucker: 0.0
      MouthLeft: 0.0
      MouthRight: 0.0
      MouthSmileLeft: 0.0
      MouthSmileRight: 0.0
      MouthFrownLeft: 0.0
      MouthFrownRight: 0.0
      MouthDimpleLeft: 0.0
      MouthDimpleRight: 0.0
      MouthStretchLeft: 0.0
      MouthStretchRight: 0.0
      MouthRollLower: 0.0
      MouthRollUpper: 0.0
      MouthShrugLower: 0.0
      MouthShrugUpper: 0.0
      MouthPressLeft: 0.0
      MouthPressRight: 0.0
      MouthLowerDownLeft: 0.0
      MouthLowerDownRight: 0.0
      MouthUpperUpLeft: 0.0
      MouthUpperUpRight: 0.0
      BrowDownLeft: 0.0
      BrowDownRight: 0.0
      BrowInnerUp: 0.0
      BrowOuterUpLeft: 0.0
      BrowOuterUpRight: 0.0
      CheekPuff: 0.0
      CheekSquintLeft: 0.0
      CheekSquintRight: 0.0
      NoseSneerLeft: 0.0
      NoseSneerRight: 0.0
      TongueOut: 0.0

mark_stylization_config.yaml
# These are the default emotions applied at the beginning of any audio clip.
# Their values range from 0.0 to 1.0
default_beginning_emotions:
  amazement: 0.0
  anger: 0.0
  cheekiness: 0.0
  disgust: 0.0
  fear: 0.0
  grief: 0.0
  joy: 0.0
  outofbreath: 0.0
  pain: 0.0
  sadness: 0.0

a2e:
  enabled: true
  live_transition_time: 0.5
  post_processing_params:
    emotion_contrast: 1.0 # Increases the spread between emotion values by pushing them higher or lower
    emotion_strength: 0.6 # Sets the strength of generated emotions relative to neutral emotion
    enable_preferred_emotion: true # Activates blending of the preferred emotion with the auto-generated emotions
    live_blend_coef: 0.7 # Coefficient for exponential smoothing of emotion
    max_emotions: 3 # Sets a firm limit on the quantity of emotion sliders engaged by A2E - emotions with the highest weight will be prioritized
    preferred_emotion_strength: 0.5 # Sets the strength of the preferred emotion (if one is loaded) relative to generated emotions

a2f:
  # A2F model, can be one of james_v2.3, claire_v2.3 or mark_v2.3
  inference_model_id: mark_v2.3
  blendshape_id: mark_topo1_v2.1

  face_params:
    eyelid_offset: 0.06 # Adjusts the default pose of eyelid open-close
    face_mask_level: 0.6 # Determines the boundary between the upper and lower regions of the face
    face_mask_softness: 0.0085 # Determines how smoothly the upper and lower face regions blend on the boundary
    input_strength: 1.3 # Controls the magnitude of the input audio
    lip_close_offset: -0.03 # Adjusts the default pose of lip close-open
    lower_face_smoothing: 0.0023 # Applies temporal smoothing to the lower face motion
    lower_face_strength: 1.4 # Controls the range of motion on the lower regions of the face
    skin_strength: 1.1 # Controls the range of motion of the skin
    upper_face_smoothing: 0.001 # Applies temporal smoothing to the upper face motion
    upper_face_strength: 1.0 # Controls the range of motion on the upper regions of the face

  blendshape_params: # Modulates the effect of each blendshape: gain * w + offset
    enable_clamping_bs_weight: false

    weight_multipliers:
      EyeBlinkLeft: 1.0
      EyeLookDownLeft: 1.0
      EyeLookInLeft: 1.0
      EyeLookOutLeft: 1.0
      EyeLookUpLeft: 1.0
      EyeSquintLeft: 1.0
      EyeWideLeft: 1.0
      EyeBlinkRight: 1.0
      EyeLookDownRight: 1.0
      EyeLookInRight: 1.0
      EyeLookOutRight: 1.0
      EyeLookUpRight: 1.0
      EyeSquintRight: 1.0
      EyeWideRight: 1.0
      JawForward: 1.0
      JawLeft: 1.0
      JawRight: 1.0
      JawOpen: 1.0
      MouthClose: 1.0
      MouthFunnel: 1.0
      MouthPucker: 1.0
      MouthLeft: 1.0
      MouthRight: 1.0
      MouthSmileLeft: 1.0
      MouthSmileRight: 1.0
      MouthFrownLeft: 1.0
      MouthFrownRight: 1.0
      MouthDimpleLeft: 1.0
      MouthDimpleRight: 1.0
      MouthStretchLeft: 1.0
      MouthStretchRight: 1.0
      MouthRollLower: 1.0
      MouthRollUpper: 1.0
      MouthShrugLower: 1.0
      MouthShrugUpper: 1.0
      MouthPressLeft: 1.0
      MouthPressRight: 1.0
      MouthLowerDownLeft: 1.0
      MouthLowerDownRight: 1.0
      MouthUpperUpLeft: 1.0
      MouthUpperUpRight: 1.0
      BrowDownLeft: 1.0
      BrowDownRight: 1.0
      BrowInnerUp: 1.0
      BrowOuterUpLeft: 1.0
      BrowOuterUpRight: 1.0
      CheekPuff: 1.0
      CheekSquintLeft: 1.0
      CheekSquintRight: 1.0
      NoseSneerLeft: 1.0
      NoseSneerRight: 1.0
      TongueOut: 1.0

    weight_offsets:
      EyeBlinkLeft: 0.0
      EyeLookDownLeft: 0.0
      EyeLookInLeft: 0.0
      EyeLookOutLeft: 0.0
      EyeLookUpLeft: 0.0
      EyeSquintLeft: 0.0
      EyeWideLeft: 0.0
      EyeBlinkRight: 0.0
      EyeLookDownRight: 0.0
      EyeLookInRight: 0.0
      EyeLookOutRight: 0.0
      EyeLookUpRight: 0.0
      EyeSquintRight: 0.0
      EyeWideRight: 0.0
      JawForward: 0.0
      JawLeft: 0.0
      JawRight: 0.0
      JawOpen: 0.0
      MouthClose: 0.0
      MouthFunnel: 0.0
      MouthPucker: 0.0
      MouthLeft: 0.0
      MouthRight: 0.0
      MouthSmileLeft: 0.0
      MouthSmileRight: 0.0
      MouthFrownLeft: 0.0
      MouthFrownRight: 0.0
      MouthDimpleLeft: 0.0
      MouthDimpleRight: 0.0
      MouthStretchLeft: 0.0
      MouthStretchRight: 0.0
      MouthRollLower: 0.0
      MouthRollUpper: 0.0
      MouthShrugLower: 0.0
      MouthShrugUpper: 0.0
      MouthPressLeft: 0.0
      MouthPressRight: 0.0
      MouthLowerDownLeft: 0.0
      MouthLowerDownRight: 0.0
      MouthUpperUpLeft: 0.0
      MouthUpperUpRight: 0.0
      BrowDownLeft: 0.0
      BrowDownRight: 0.0
      BrowInnerUp: 0.0
      BrowOuterUpLeft: 0.0
      BrowOuterUpRight: 0.0
      CheekPuff: 0.0
      CheekSquintLeft: 0.0
      CheekSquintRight: 0.0
      NoseSneerLeft: 0.0
      NoseSneerRight: 0.0
      TongueOut: 0.0
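
For example, to run the mark model instead of the claire defaults, only the model identifiers need to change; a minimal sketch of the relevant keys (values taken from mark_stylization_config.yaml above, with the rest of the config map left as shipped):

```yaml
a2f:
  inference_model_id: mark_v2.3   # one of james_v2.3, claire_v2.3 or mark_v2.3
  blendshape_id: mark_topo1_v2.1  # blendshape model matching the chosen face model
```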

If the A2F-3D service is already running and you need to update the config map, you can:

  1. Export the config map to a file, for example configmap.yaml.

$ microk8s kubectl get configmap a2f-configs-cm -o yaml > configmap.yaml

  2. Edit the file using your preferred text editor.

$ nano configmap.yaml

  3. Apply the updated config map.

$ microk8s kubectl apply -f configmap.yaml

  4. Delete the Audio2Face-3D pod to restart it.

$ microk8s kubectl delete pods <audio2face_pod_name>

The <audio2face_pod_name> can be found by running:

$ microk8s kubectl get pods
NAME                                  READY   STATUS             RESTARTS       AGE
a2f-a2f-deployment-xxx-xxx            1/1     Running            0              1h

Here, the pod name is a2f-a2f-deployment-xxx-xxx.
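
To avoid copying the name by hand, the pod name can also be extracted programmatically. A sketch on captured sample output (pipe the real microk8s kubectl get pods output instead):

```shell
# Extract the first pod name matching the a2f deployment prefix.
sample='a2f-a2f-deployment-94d89f979-v6n24   1/1     Running   0     1h'
POD=$(printf '%s\n' "$sample" | awk '/^a2f-a2f-deployment/ {print $1; exit}')
echo "$POD"   # a2f-a2f-deployment-94d89f979-v6n24
```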

You can then install the Helm chart by executing the following command:

$ microk8s helm install a2f-3d-nim audio2face-3d/

3. Optional: Expose the pod IP publicly#

You need to expose the pod IP publicly in order to access the A2F-3D NIM from another machine. If this is not your use case, you can skip this step.

The following yaml file defines the set of ports for communication between external systems and the a2f-a2f-deployment-a2f-service pod in the Kubernetes cluster. Read the comments to understand the file structure.

service_expose.yaml
apiVersion: v1
kind: Service
metadata:
  name: a2f-a2f-deployment-a2f-service
spec:
  type: NodePort
  ports:
    # Port 50010 serves A2F-3D gRPC service.
    - port: 50010 # The port used internally by the Kubernetes cluster for communication.
      targetPort: 50010 # The port on the application (pod) that the service forwards traffic to.
      nodePort: 30010 # The externally accessible port on the node that maps to the service. External traffic to NodeIP:30010 is routed to PodIP:50010.
    # Port 8000 serves A2F-3D NIM Http service.
    - port: 8000
      targetPort: 8000
      nodePort: 30020
    # Port 9464 serves A2F-3D Prometheus Endpoint.
    - port: 9464
      targetPort: 9464
      nodePort: 30030

You can adjust the external ports as needed, ensuring they fall within the 30000–32767 range. To apply the YAML file and expose the specified ports, run the following command:

$ sudo microk8s kubectl apply -f service_expose.yaml
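
The NodePort range restriction mentioned above can be expressed as a small guard before editing the YAML (a sketch; check_nodeport is a hypothetical helper, and 30000-32767 is Kubernetes' default service-node-port-range):

```shell
# Succeed only if the given port is a valid NodePort under the default range.
check_nodeport() {
  [ "$1" -ge 30000 ] && [ "$1" -le 32767 ]
}
check_nodeport 30010 && echo "30010 ok"
check_nodeport 50010 || echo "50010 out of range"
```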

Then delete the Audio2Face-3D pod to restart it:

$ microk8s kubectl delete pods <audio2face_pod_name>

The <audio2face_pod_name> can be found by running:

$ microk8s kubectl get pods
NAME                                  READY   STATUS             RESTARTS       AGE
a2f-a2f-deployment-xxx-xxx            1/1     Running            0              1h

Here, the pod name is a2f-a2f-deployment-xxx-xxx.
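
With the service exposed, a client on another machine reaches the gRPC service at <node-ip>:30010 and the HTTP service at <node-ip>:30020. A sketch of assembling the endpoint strings (192.0.2.10 is a placeholder; substitute your node's IP, found for example with hostname -I):

```shell
NODE_IP=192.0.2.10                  # placeholder; use your node's real IP
GRPC_ENDPOINT="$NODE_IP:30010"      # maps to pod port 50010 via the NodePort
HTTP_ENDPOINT="http://$NODE_IP:30020"  # maps to pod port 8000
echo "$GRPC_ENDPOINT"   # 192.0.2.10:30010
```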

4. Check that it is up and running#

Execute the following command to check if the app was deployed successfully. Please note that the command output may vary.

$ microk8s kubectl get pods
  NAME                                 READY   STATUS    RESTARTS   AGE
  a2f-a2f-deployment-94d89f979-v6n24   0/1     Init:0/1  0          53s

Note

If the status is ImagePullBackOff, reconfigure the secrets and make sure your NGC_API_KEY is valid.

Note

If the status is Pending, set the local path provisioner again:

$ curl https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.23/deploy/local-path-storage.yaml | sed 's/^  name: local-path$/  name: mdx-local-path/g' | microk8s kubectl apply -f -

Wait until the STATUS changes to Running.

To run a client application, you will need the external IP and port to connect to. You can obtain this information by executing the following command:

$ microk8s kubectl get svc
  NAME                             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                       AGE
  a2f-a2f-deployment-a2f-service   ClusterIP   <ip>            <none>        50010/TCP,9464/TCP,8000/TCP   56m

You can then interact with Audio2Face-3D using, for example, the provided sample app.

Uninstall the app#

To uninstall the app, run:

$ microk8s helm uninstall a2f-3d-nim