Customize Reference Workflows#

The reference workflows described in section Reference Workflows provide high-level starting points for Tokkio. This section lists the common customizations that can be performed for these workflows.

Some of these customizations require rebuilding the application using UCS tools to create an updated helm chart. Each customization section includes an is_rebuild_needed: Yes/No indicator to show whether a rebuild is required.

Note

This section assumes that you have one of the reference workflows already successfully deployed.

RAG Endpoint Customization#

is_rebuild_needed: No

This is one of the most common customizations and can be performed with single-line changes. The LLM RAG workflow uses a NIM endpoint by default. Follow the steps below to adjust the options for the use cases listed here:

Use cases:

  • Use OpenAI as the LLM endpoint (instead of the default NIM endpoint)

  • Use RAG pipeline and a custom RAG deployment URL

  • Change the NIM or OpenAI model used as the LLM endpoint.

Steps listed below are valid for any rendering variation of the LLM RAG bot, but are listed with the single stream OV rendering helm chart as an example:

  1. Download the helm chart of the sample LLM RAG workflow from NGC. As an example, here is a link to the single stream OV rendering variation of the helm chart https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ace/helm-charts/ucs-tokkio-app-base-1-stream-llm-rag-3d-ov.

  2. Find the relevant block to update in the values.yaml of the downloaded helm chart folder and copy it over to a new YAML file. The relevant block for this customization is shown below. Be sure to copy the entire block so that it retains the correct path within the YAML file. You can save the new file with any name after the update. For example: my_override_values.yaml.

    ace-agent-plugin-server:
      pluginConfig:
        plugins:
          rag:
            parameters:
                USE_RAG: false
                RAG_SERVER_URL: http://IP:PORT
                NIM_MODEL: "meta/llama3-8b-instruct" #as defined at https://build.nvidia.com/meta/llama-3_1-8b-instruct
                USE_OPENAI: false
                OPENAI_MODEL: "gpt-4"
    
  3. To use OpenAI as the LLM endpoint (instead of the default NIM endpoint), set USE_OPENAI: true.

  4. To use a RAG pipeline and a custom RAG deployment URL, set USE_RAG: true and update RAG_SERVER_URL: http://IP:PORT.

  5. To change the NIM or OpenAI model used as the LLM endpoint, update the value of NIM_MODEL or OPENAI_MODEL accordingly. A sample override file combining these settings is shown after this list.

    Note

    Ensure that the response format of the chosen LLM model or the RAG endpoint is compatible with the out-of-the-box implementation of the LLM RAG bot. If it is not, you will need to update rag.py in the downloaded source to adapt to the required request/response format. Source and implementation details for the LLM RAG resource are provided here.

  6. Follow the steps listed in Integrating Customization changes without rebuild to reflect the changes in your already deployed Tokkio workflow.
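For reference, a minimal my_override_values.yaml that switches to the OpenAI endpoint might look like the sketch below (gpt-4o is only an illustrative model name; use any model available to your OpenAI account):

ace-agent-plugin-server:
  pluginConfig:
    plugins:
      rag:
        parameters:
          USE_RAG: false
          RAG_SERVER_URL: http://IP:PORT
          NIM_MODEL: "meta/llama3-8b-instruct"
          USE_OPENAI: true        # switch from the default NIM endpoint to OpenAI
          OPENAI_MODEL: "gpt-4o"  # illustrative model selection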

Avatar and Scene Customization#

is_rebuild_needed: No

Use case:

  • Custom Avatar/scene displayed in the UI for Tokkio deployment

Tokkio enables you to customize your avatar with the Avatar Configurator, or you can import a third-party avatar by following the custom avatar guide. See the Avatar Customization section for more information.

Note that there are various alternative methods to upload avatar scenes.

This document describes how to upload an ACE avatar scene to NGC using the NGC Resource Downloader and how to update the UCS configuration to reference these new scene files. These instructions apply for custom avatar scenes and for scenes created with the Avatar Configurator.

The steps listed below are valid for any rendering variation of the LLM RAG bot, but are listed with the single stream OV rendering helm chart as an example:

  1. Download the helm chart of the sample LLM RAG workflow from NGC. As an example, here is a link to the single stream OV rendering variation of the helm chart https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ace/helm-charts/ucs-tokkio-app-base-1-stream-llm-rag-3d-ov.

  2. Find the relevant block to update in the values.yaml of the downloaded helm chart folder and copy it over to a new YAML file. The relevant block for this customization is shown below. Be sure to copy the entire block so that it retains the correct path within the YAML file. Replace the PATH_TO_REMOTE_RESOURCE value with the path to your resource location (a hypothetical example is shown after this list). You can save the new file with any name after the update. For example my_override_values.yaml.

    ia-animation-graph-microservice:
      initContainers:
      - command:
        - /bin/bash
        - download_resource.sh
        env:
        - name: REMOTE_RESOURCE_PATH
          value: "<PATH_TO_REMOTE_RESOURCE>"
    
      resourceDownload:
        remoteResourcePath: "<PATH_TO_REMOTE_RESOURCE>"
    
    ia-omniverse-renderer-microservice:
      initContainers:
      - command:
        - /bin/bash
        - download_resource.sh
        env:
        - name: REMOTE_RESOURCE_PATH
          value: "<PATH_TO_REMOTE_RESOURCE>"
      resourceDownload:
        remoteResourcePath: "<PATH_TO_REMOTE_RESOURCE>"
    
  3. Follow the steps listed in Integrating Customization changes without rebuild to reflect the changes in your already deployed Tokkio workflow.
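For illustration, the same block with a hypothetical NGC resource path filled in (the org/team/resource:version form shown is an assumption; confirm the exact path format expected by download_resource.sh for your uploaded scene):

ia-animation-graph-microservice:
  initContainers:
  - command:
    - /bin/bash
    - download_resource.sh
    env:
    - name: REMOTE_RESOURCE_PATH
      value: "myorg/myteam/my-avatar-scene:1.0.0"   # hypothetical NGC path
  resourceDownload:
    remoteResourcePath: "myorg/myteam/my-avatar-scene:1.0.0"

ia-omniverse-renderer-microservice:
  initContainers:
  - command:
    - /bin/bash
    - download_resource.sh
    env:
    - name: REMOTE_RESOURCE_PATH
      value: "myorg/myteam/my-avatar-scene:1.0.0"
  resourceDownload:
    remoteResourcePath: "myorg/myteam/my-avatar-scene:1.0.0"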

Avatar Voice Customization (NVIDIA Riva TTS)#

is_rebuild_needed: No

Use case:

  • Select a voice option for the custom avatar when using Riva TTS (the default configuration with Tokkio)

Tokkio enables users to customize the avatar voice easily through the Riva TTS configuration. Check the available voice options in the Riva TTS documentation for more information.

The steps listed below are valid for any rendering variation of the LLM RAG bot, but are listed with the single stream OV rendering helm chart as an example:

  1. Download the helm chart of the sample LLM RAG workflow from NGC. As an example, here is a link to the single stream OV rendering variation of the helm chart https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ace/helm-charts/ucs-tokkio-app-base-1-stream-llm-rag-3d-ov.

  2. Find the relevant block to update in the values.yaml of the downloaded helm chart folder and copy it over to a new YAML file. The relevant block for this customization is shown below. Be sure to copy the entire block so that it retains the correct path within the YAML file. Replace the voice_name value with a Riva TTS voice of your choice that is compatible with the selected speech model; the TTS speech model is also listed in the block below (see the sample override after this list). You can save the new file with any name after the update. For example my_override_values.yaml.

    riva:
      ngcModelConfigs:
        triton0:
          models:
          - nvidia/ace/rmir_asr_parakeet_1-1b_en_us_str_vad:2.17.0
          - nvidia/riva/rmir_tts_fastpitch_hifigan_en_us_ipa:2.17.0
      .
      .
    riva_tts:
      RivaTTS:
        voice_name: English-US.Male-1 # Replace with a voice selection of choice
    
  3. Follow the steps listed in Integrating Customization changes without rebuild to reflect the changes in your already deployed Tokkio workflow.
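As an illustration, a my_override_values.yaml that keeps the default IPA TTS model but selects a different voice might look like the sketch below (English-US.Female-1 is an assumed voice name; verify its availability for the deployed model in the Riva TTS documentation):

riva:
  ngcModelConfigs:
    triton0:
      models:
      - nvidia/ace/rmir_asr_parakeet_1-1b_en_us_str_vad:2.17.0
      - nvidia/riva/rmir_tts_fastpitch_hifigan_en_us_ipa:2.17.0
riva_tts:
  RivaTTS:
    voice_name: English-US.Female-1   # assumed voice name; confirm availability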

Plugin Resource Customization#

is_rebuild_needed: No

Use cases:

  • Change the greeting at the start/stop of the conversation

  • Update the request or response schema to connect to the custom RAG pipeline

  • Add filler words to mask the latency of fetching the response from the pipeline

  • Change avatar name

  • Update bot gestures

  • Update TTS pronunciation

Refer to Tokkio LLM RAG to implement various customizations mentioned in the use cases above.

Publish the bot (plugin resource) to your NGC repository using the command below:

$ ngc registry resource upload-version --source BOT_FOLDER_NAME targeted_ngc_path:version

The steps listed below are valid for any rendering variation of the LLM RAG bot, but are listed with the single stream OV rendering helm chart as an example:

  1. Download the helm chart of the sample LLM RAG workflow from NGC. As an example, here is a link to the single stream OV rendering variation of the helm chart https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ace/helm-charts/ucs-tokkio-app-base-1-stream-llm-rag-3d-ov.

  2. Find the relevant block to update in the values.yaml of the downloaded helm chart folder and copy it over to a new YAML file. The relevant block for this customization is shown below. Be sure to copy the entire block so that it retains the correct path within the YAML file (a filled-in example is shown after this list). You can save the new file with any name after the update. For example my_override_values.yaml:

    ace-agent-chat-controller:
        configNgcPath: "<path to the custom bot resource on ngc>"
        .
        .
    ace-agent-chat-engine:
        configNgcPath: <path to the custom bot resource on ngc>
        .
        .
    ace-agent-plugin-server:
        configNgcPath: "<path to the custom bot resource on ngc>"
        .
        .
    # Optional edit. Only applicable if using the NLP server
    ace-agent-nlp-server:
        configNgcPath: "<path to the custom bot resource on ngc>"
    
  3. Follow the steps listed in Integrating Customization changes without rebuild to reflect the changes in your already deployed Tokkio workflow.
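For example, if the customized bot resource was uploaded as myorg/myteam/tokkio_rag_bot:1.0.0 (a hypothetical NGC path), the override file would point each component at that resource:

ace-agent-chat-controller:
    configNgcPath: "myorg/myteam/tokkio_rag_bot:1.0.0"
ace-agent-chat-engine:
    configNgcPath: "myorg/myteam/tokkio_rag_bot:1.0.0"
ace-agent-plugin-server:
    configNgcPath: "myorg/myteam/tokkio_rag_bot:1.0.0"
# Optional. Only applicable if using the NLP server
ace-agent-nlp-server:
    configNgcPath: "myorg/myteam/tokkio_rag_bot:1.0.0"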

Disable User Attention#

is_rebuild_needed: No

Use case:

  • Disable the bot behavior of responding to user queries only when the user is looking directly into the camera.

The steps listed below are valid for any rendering variation of the LLM RAG bot, but are listed with the single stream OV rendering helm chart as an example:

  1. Download the helm chart of the sample LLM RAG workflow from NGC. As an example, here is a link to the single stream OV rendering variation of the helm chart https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ace/helm-charts/ucs-tokkio-app-base-1-stream-llm-rag-3d-ov.

  2. Find the relevant block to update in the values.yaml of the downloaded helm chart folder and copy it over to a new YAML file. The relevant block for this customization is shown below. Be sure to copy the entire block so that it retains the correct path within the YAML file. To disable user attention, set the enableUserAttention flag to false as shown below. You can save the new file with any name after the update. For example my_override_values.yaml.

    ace-agent-chat-engine:
        enableUserAttention: "false"
    
  3. Follow the steps listed in Integrating Customization changes without rebuild to reflect the changes in your already deployed Tokkio workflow.

Vision AI Customizations#

is_rebuild_needed: Yes

Use cases:

  • Using Tokkio without a webcam or camera

  • Disabling Computer Vision Services in the Tokkio Backend

Instructions:

  • Using Tokkio without a webcam or camera

To use Tokkio without a webcam, refer to the UI frontend modifications section. In this case, a chart rebuild is not required since it is only a UI change.

  • Disabling Computer Vision Services in the Tokkio Backend

To disable the Computer Vision Services in the Tokkio backend, refer to Disabling vision with backend modifications. In this case, a chart rebuild is required; follow the steps listed in Integrating Customization changes with rebuild to reflect the changes by re-deploying your Tokkio workflow.

Third-Party TTS Voice Customization#

is_rebuild_needed: Yes

Use case:

  • Use Eleven Labs or other third party TTS solution with Tokkio

Refer to Using third party TTS solution for configuring a third-party TTS solution for the speech response. Note that this customization requires an additional microservice, the NLP server. The app.yaml and app-params.yaml referred to in the tutorial correspond to tokkio-app.yaml and tokkio-app-params.yaml for Tokkio. Update these files per the instructions in the linked tutorial; a hedged sketch of the tokkio-app.yaml addition is shown below.
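A minimal sketch of how the NLP server entry might be added to tokkio-app.yaml, assuming the microservice type ucf.svc.ace-agent.nlp-server and parameters modeled on the other ace-agent entries in the app file (confirm the exact names against the linked tutorial):

...
- name: nlp-server
  type: ucf.svc.ace-agent.nlp-server   # assumed type name; verify in the tutorial
  parameters:
    imagePullSecrets:
      - name: ngc-docker-reg-secret
  secrets:
    ngc-api-key-secret: k8sSecret/ngc-api-key-secret/NGC_CLI_API_KEY
...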

Follow the steps listed in Integrating Customization changes with rebuild to reflect the changes by re-deploying your Tokkio workflow.

A2F-2D Customization#

is_rebuild_needed: Yes

Use case:

  • Customize A2F-2D rendering such as animation quality, animation property tuning, input / output media specifications, and more.

1. Create a JSON config file#

When initiating the gRPC connection with the A2F-2D microservice, a JSON object stored in a file is used to specify the animation and input/output media configurations.

For demonstration, the default lp_config.json referenced by the Chat-Controller microservice is shown below. You can create your own JSON file by following this example to customize the mentioned configurations and properties based on your preferences.

{
  "lp_config": {
    "animation_cropping_mode": "ANIMATION_CROPPING_MODE_BLEND",
    "model_selection": "MODEL_SELECTION_PERF",
    "eye_blink_config": {
      "blink_frequency": {
        "value": 15,
        "unit": "UNIT_TIMES_PER_MINUTE"
      },
      "blink_duration": {
        "value": 6,
        "unit": "UNIT_FRAME"
      }
    },
    "gaze_look_away_config": {
      "enable_gaze_look_away": false,
      "max_look_away_offset": {
        "value": 20,
        "unit": "UNIT_DEGREE_ANGLE"
      },
      "min_look_away_interval": {
        "value": 240,
        "unit": "UNIT_FRAME"
      },
      "look_away_interval_range": {
        "value": 60,
        "unit": "UNIT_FRAME"
      }
    },
    "mouth_expression_config": {
      "mouth_expression_multiplier": 1.0
    }
  },
  "endpoint_config": {
    "input_media_config": {
      "audio_input_config": {
        "stream_config": {
          "stream_type": "GRPC"
        },
        "channels": 1,
        "channel_index": 0,
        "layout": "AUDIO_LAYOUT_INTERLEAVED",
        "sample_rate_hz": 16000,
        "chunk_duration_ms": 20,
        "encoding": "AUDIO_ENCODING_RAW",
        "decoder_config": {
          "raw_dec_config": {
            "format": "AUDIO_FORMAT_S16LE"
          }
        }
      }
    },
    "output_media_config": {
      "audio_output_config": {
        "stream_config": {
          "stream_type": "UDP",
          "udp_params": {
            "host": "127.0.0.1",
            "port": "9017"
          }
        },
        "payloader_config": {
          "type": "PAYLOADER_RTP"
        },
        "sample_rate_hz": 16000,
        "chunk_duration_ms": 20,
        "encoding": "AUDIO_ENCODING_RAW",
        "encoder_config": {
          "raw_enc_config": {
            "format": "AUDIO_FORMAT_S16BE"
          }
        }
      },
      "video_output_config": {
        "stream_config": {
          "stream_type": "UDP",
          "udp_params": {
              "host": "127.0.0.1",
              "port": "9019"
            }
        },
        "payloader_config": {
          "type": "PAYLOADER_RTP"
        },
        "encoding": "H264",
        "encoder_config": {
          "h264_enc_config": {
            "idr_frame_interval": 30
          }
        }
      }
    }
  },
  "quality_profile": "SPEECH_LP_QUALITY_PROFILE_LOW_LATENCY"
}

Configuration options:

  • animation_cropping_mode - Portrait image cropping preference - ANIMATION_CROPPING_MODE_FACEBOX, ANIMATION_CROPPING_MODE_BLEND, ANIMATION_CROPPING_MODE_INSET_BLEND

  • model_selection - MODEL_SELECTION_PERF for performance mode and MODEL_SELECTION_QUALITY for quality mode

  • eye_blink_config - Customize the eye blink behavior of the avatar such as blink_frequency and blink_duration

  • gaze_look_away_config - Redirect the eyes to look away and specify the angle as well as the intervals

  • mouth_expression_config - Multiplier to exaggerate the mouth expression

  • quality_profile - Different modes of execution based on the preference of performance vs. quality - SPEECH_LP_QUALITY_PROFILE_LOW_LATENCY, SPEECH_LP_QUALITY_PROFILE_ULTRA_LOW_LATENCY, SPEECH_LP_QUALITY_PROFILE_HIGH_QUALITY, SPEECH_LP_QUALITY_PROFILE_ULTRA_HIGH_QUALITY

For details on the configuration options, refer to the protos/v1/speech_live_portrait.proto under the A2F-2D quick start guide file browser.

Specify the path of the JSON file created in the above step in tokkio-app.yaml by replacing <path_to_lp_config> as shown below:

...
- name: chat-controller
  type: ucf.svc.ace-agent.chat-controller
  parameters:
    imagePullSecrets:
      - name: ngc-docker-reg-secret
  secrets:
    ngc-api-key-secret: k8sSecret/ngc-api-key-secret/NGC_CLI_API_KEY
  files:
    lp_config.json: <path_to_lp_config>
...
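For example, if the customized config is saved in a configs folder alongside the app file, the entry might read (the relative path is purely illustrative):

  files:
    lp_config.json: configs/my_lp_config.json   # illustrative path to your custom JSON config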

Follow the steps listed in Integrating Customization changes with rebuild to reflect the changes by re-deploying your Tokkio workflow.

Number of Concurrent Streams Customization#

is_rebuild_needed: Yes

Use case:

  • Scale to any number of concurrent streams on a single node, constrained only by system resources such as RAM, GPU frame buffer size, and compute

Tokkio app variants are compiled for running 1, 3, and 6 concurrent streams; refer to Reference Workflows for details on those pre-built options. The following information is for users who wish to run a number of streams not covered by one of the pre-built variants.

The maximum concurrency is determined primarily by the number of GPUs on the deployment node. The rule of thumb is max_concurrency = num_GPU - 1 for T4 type GPUs, and max_concurrency = (num_GPU - 1) * 2 for A10 and L4 type GPUs.

For example, 4 A10 GPUs allow a maximum concurrency of (4 - 1) * 2 = 6 streams, while 4 T4 GPUs only allow a maximum concurrency of 4 - 1 = 3 streams.

You need at least 2 GPUs of any supported type to run even 1 stream. Make sure you have enough compute allocated for the desired concurrency before proceeding to the configuration section.

Configuration:

Apply the following changes to tokkio-app-params.yaml and follow the steps listed in Integrating Customization changes with rebuild to reflect the changes by re-deploying your Tokkio workflow. A filled-in example for a hypothetical 4x A10 node follows the template below.

...
---
animation-graph:
  animationServer:
    maxStreamCapacity: <max_concurrency>
  ucfVisibleGpus:
    - 1
audio2face-3d:
  configs:
    deployment_config.yaml:
      common:
        stream_number: <max_concurrency>
      endpoints:
        use_bidirectional: false
  ucfVisibleGpus:
    - 0
avatar-renderer:
  deployment:
    gpuAllocLimit: <1 for T4, and 2 for A10 or L4>
  replicas: <max_concurrency>
  ucfVisibleGpus: <[1 ~ number of GPUs - 1]>  # e.g., [1, 2, 3] for 4 GPUs
ds-visionai:
  ucfVisibleGpus:
    - 0
renderer-sdr:
  sdrMaxReplicas: '<max_concurrency>'
tokkio-ingress-mgr:
  configs:
    ingress:
      maxNumSession: <max_concurrency>
ue-renderer-sdr:
  sdrMaxReplicas: '<max_concurrency>'
unreal-renderer:
  deployment:
    gpuDisableAlloc: false
    gpuAllocLimit: <1 for T4, and 2 for A10 or L4>
  replicas: <max_concurrency>
  ucfVisibleGpus: <[1 ~ number of GPUs - 1]>  # e.g., [1, 2, 3] for 4 GPUs
vms:
  configs:
    vst_config.json:
      network:
        max_webrtc_in_connections: <max_concurrency * 2>
        max_webrtc_out_connections: <max_concurrency * 2>
  ucfVisibleGpus:
    - 0
...
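For instance, on a hypothetical node with 4 A10 GPUs, max_concurrency = (4 - 1) * 2 = 6 and the substitutions would look as follows (entries not shown follow the same pattern):

animation-graph:
  animationServer:
    maxStreamCapacity: 6          # max_concurrency
  ucfVisibleGpus:
    - 1
audio2face-3d:
  configs:
    deployment_config.yaml:
      common:
        stream_number: 6          # max_concurrency
avatar-renderer:
  deployment:
    gpuAllocLimit: 2              # 2 for A10 or L4
  replicas: 6
  ucfVisibleGpus: [1, 2, 3]       # GPUs 1 through (number of GPUs - 1)
renderer-sdr:
  sdrMaxReplicas: '6'
tokkio-ingress-mgr:
  configs:
    ingress:
      maxNumSession: 6
vms:
  configs:
    vst_config.json:
      network:
        max_webrtc_in_connections: 12    # max_concurrency * 2
        max_webrtc_out_connections: 12   # max_concurrency * 2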

UI Customization#

is_rebuild_needed: No

Use case:

  • Use a custom UI layout for Tokkio

The UI is not currently part of the Tokkio helm chart, so UI changes are independent of the rest of the Tokkio microservices. Refer to the UI Customization Guide for the different UI customization options.

Audio2Face-3D Microservice Customization#

is_rebuild_needed: No

Use case:

  • Tune avatar’s facial animation and lip motion

Tokkio uses the Audio2Face-3D microservice to drive the avatar's facial animation, including the lip motion. The Audio2Face-3D microservice takes avatar speech as input and generates a facial animation stream. The quality of the facial animation depends on the 3D avatar model and its blendshape setup. In addition, the generation also depends on the voice that drives the animation. To account for different avatar assets and different voice inputs, the Audio2Face-3D microservice exposes a handful of parameters that can be tuned to improve the visual quality of the facial animation.

See the Audio2Face-3D parameter tuning guide for more information on various parameters.

The steps listed below are valid for any rendering variation of the LLM RAG bot, but are listed with the single stream OV rendering helm chart as an example:

  1. Download the helm chart of the sample LLM RAG workflow from NGC. Here is a link to the single stream OV rendering variation of the helm chart https://catalog.ngc.nvidia.com/orgs/nvidia/teams/ace/helm-charts/ucs-tokkio-app-base-1-stream-llm-rag-3d-ov.

  2. Find the relevant block to update in the values.yaml of the downloaded helm chart folder and copy it over to a new YAML file. The relevant block for this customization is shown below. Be sure to copy the entire block so that it retains the correct path within the YAML file. The following Audio2Face-3D example configuration has been optimized for ElevenLabs' Jessica voice and the Claire avatar model. You can save the new file with any name after the update. For example my_override_values.yaml.

Note

The Claire avatar asset is not publicly available. Please reach out to your NVIDIA contact for access.

# These parameters are tuned for the Claire avatar asset and ElevenLabs' Jessica voice.

# ...

audio2face-with-emotion:
    configs:
        a2f_config.yaml:
            # ...

            # Model parameters
            a2fModelName: "claire_v1.3"

            # Face parameters
            # Note: All keys must be provided
            faceParams: |
              {
                "face_params": {
                  "input_strength": 1.0,
                  "prediction_delay": 0.15,
                  "upper_face_smoothing": 0.0010000000474974513,
                  "lower_face_smoothing": 0.00800000037997961,
                  "upper_face_strength": 1.7000000476837158,
                  "lower_face_strength": 1.25,
                  "face_mask_level": 0.6000000238418579,
                  "face_mask_softness": 0.008500000461935997,
                  "emotion": [
                    0.30000001192092896,
                    0.0,
                    0.20000000298023224,
                    0.0,
                    0.0,
                    0.0,
                    0.3499999940395355,
                    0.0,
                    0.0,
                    0.0
                  ],
                  "skin_strength": 1.0,
                  "lip_close_offset": 0.05219999700784683,
                  "eyelid_offset": 0.009999999776482582,
                  "source_shot": "cp1_neutral",
                  "source_frame": 10,
                  "blink_strength": 1.0,
                  "lower_teeth_strength": 1.25,
                  "lower_teeth_height_offset": 0.0,
                  "lower_teeth_depth_offset": 0.0,
                  "tongue_strength": 1.3,
                  "tongue_height_offset": 0.0,
                  "tongue_depth_offset": 0.0,
                  "eyeballs_strength": 1.0,
                  "saccade_strength": 0.6,
                  "right_eye_rot_x_offset": 0.0,
                  "right_eye_rot_y_offset": 0.0,
                  "left_eye_rot_x_offset": 0.0,
                  "left_eye_rot_y_offset": 0.0,
                  "blink_interval": 3.0,
                  "eye_saccade_seed": 0,
                  "keyframer_fps": 60.0
                }
              }

            # Emotion parameters
            a2eEnabled: "True"
            a2eEmotionContrast: "1.0"
            a2eLiveBlendCoef: "1.0"
            a2eEnablePreferredEmotion: "True"
            a2ePreferredEmotionStrength: "0.75"
            a2eEmotionStrength: "1.0"
            a2eMaxEmotions: "3"

            # Blendshape parameters
            bsWeightMultipliers: [
                1.0,                 # EyeBlinkLeft
                1.0,                 # EyeLookDownLeft
                1.0,                 # EyeLookInLeft
                1.0,                 # EyeLookOutLeft
                1.0,                 # EyeLookUpLeft
                1.5,                 # EyeSquintLeft
                1.5,                 # EyeWideLeft
                1.0,                 # EyeBlinkRight
                1.0,                 # EyeLookDownRight
                1.0,                 # EyeLookInRight
                1.0,                 # EyeLookOutRight
                1.0,                 # EyeLookUpRight
                1.5,                 # EyeSquintRight
                1.5,                 # EyeWideRight
                1.0,                 # JawForward
                1.0,                 # JawLeft
                1.0,                 # JawRight
                0.75,                # JawOpen
                0.5299999713897705,  # MouthClose
                1.2999999523162842,  # MouthFunnel
                1.5,                 # MouthPucker
                1.0,                 # MouthLeft
                1.0,                 # MouthRight
                0.8999999761581421,  # MouthSmileLeft
                0.8999999761581421,  # MouthSmileRight
                0.5,                 # MouthFrownLeft
                0.5,                 # MouthFrownRight
                0.800000011920929,   # MouthDimpleLeft
                0.800000011920929,   # MouthDimpleRight
                0.800000011920929,   # MouthStretchLeft
                0.800000011920929,   # MouthStretchRight
                1.0,                 # MouthRollLower
                0.5999999642372131,  # MouthRollUpper
                1.0,                 # MouthShrugLower
                1.0,                 # MouthShrugUpper
                1.0,                 # MouthPressLeft
                1.0,                 # MouthPressRight
                1.2000000476837158,  # MouthLowerDownLeft
                1.2000000476837158,  # MouthLowerDownRight
                1.0,                 # MouthUpperUpLeft
                1.0,                 # MouthUpperUpRight
                1.0,                 # BrowDownLeft
                1.0,                 # BrowDownRight
                1.7599999904632568,  # BrowInnerUp
                1.0,                 # BrowOuterUpLeft
                1.0,                 # BrowOuterUpRight
                0.800000011920929,   # CheekPuff
                0.6000000238418579,  # CheekSquintLeft
                0.6000000238418579,  # CheekSquintRight
                0.6000000238418579,  # NoseSneerLeft
                0.6000000238418579,  # NoseSneerRight
                1.0,                 # TongueOut
            ]
            
# ...
  3. Follow the steps listed in Integrating Customization changes without rebuild to reflect the changes in your already deployed Tokkio workflow.

Renderer-Specific Customization#

Refer to the Reference Workflows for Omniverse Renderer, Live Portrait or Unreal Engine and their related customizations.

Adding a new microservice#

If required, you can also add a new microservice to Tokkio to customize your use case. Refer to the UCS tool to create and build a new UCS microservice.

Note

The addition of a microservice to the Tokkio workflow must be done with careful consideration of the endpoints that the microservice will interface with. You may use UCS Studio to visualize and connect the microservice endpoints for additional microservices.

Once the microservice is added to the Tokkio graph in UCS Studio, follow the steps listed in Integrating Customization changes with rebuild to reflect the changes by re-deploying your Tokkio workflow.
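For orientation, a new microservice entry in tokkio-app.yaml might look like the hedged sketch below; the component name and type are hypothetical, modeled on the existing ace-agent entries, and the actual endpoint connections should be created and verified in UCS Studio:

...
- name: my-custom-service
  type: mycompany.svc.my-custom-service   # hypothetical UCS microservice type
  parameters:
    imagePullSecrets:
      - name: ngc-docker-reg-secret
  # endpoint connections to other Tokkio microservices are wired in UCS Studio
...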