GCP CSP Automated Setup Guide#
Overview#
In this guide, we will go through the steps needed to:
Understand the architecture of the infrastructure we will be setting up to host the Tokkio Application on the GCP CSP.
Perform the necessary steps to procure the pre-requisite access and information to use the automated deployment scripts.
Create one or more deployments of the Tokkio Application using the automated deployment scripts.
Verify the deployment.
Tear down the created infrastructure when no longer required.
Infrastructure Layout#
Setting up the Tokkio Application on GCP requires several GCP resources to be created, such as Virtual Machines, Firewalls, Load Balancers, and a Cloud Storage bucket for hosting the UI content. While there are several patterns that can be followed to bring up the infrastructure needed for Tokkio, the layout below shows the approach we will follow.

In addition to bringing up GCP resources, we also have to download the Tokkio application and its dependency artifacts, configure them, and install them.
These automation scripts simplify that by abstracting out the complexity and allowing you to work with mainly two files: deploy-template.yml and secrets.sh.
deploy-template.yml is an abstraction of the infrastructure specification we need for bringing up the Tokkio Application. At a high level, we define the base infrastructure specifications (e.g. VirtualNetwork), then add the TURN infrastructure (e.g. Virtual Machine) and Application infrastructure specifications (e.g. GPU Virtual Machine).
The TURN infrastructure and Application infrastructure will be established on the base infrastructure specified in this deploy-template.yml.
secrets.sh is the mechanism for the user to provide secrets.
It takes in secrets needed for application configuration, such as the TURN server password, the OpenAI API key, etc., and uses them while installing the application.
Note
We will skip some optional features such as Auto Scaling in this reference setup.
Important
Many of the resources in this setup may not fall under the Free Tier; check the GCP Cost Management pages to understand the cost implications.
Prerequisites#
This setup guide assumes you have the following conditions met:
Access to the GCP Console via a user with Admin access to at least one Project.
An Ubuntu 20.04 or 22.04 based machine, with a user that has sudo privileges, to run the automated deployment scripts.
A GCP service account with which the automated deployment scripts can authenticate.
A GCP Cloud Storage bucket to host the state of the automated deployment scripts, so that the created infrastructure can be modified or torn down at a later date.
Registration of a domain on which the Tokkio Application can be hosted.
An NVIDIA GPU Cloud (NGC) API key, set up by following the instructions here.
An SSH key pair to access the instances we are going to set up.
You may use an existing SSH key pair for this access or create a new pair.
Reference documentation for creating a public/private SSH key pair is available here; a minimal example also follows this list.
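If you need to create a new pair, a minimal sketch using ssh-keygen (the key file name tokkio-deploy-key is illustrative):
# Generate the key pair; the public key content is later supplied via _ssh_public_key in secrets.sh.
ssh-keygen -t rsa -b 4096 -f ~/.ssh/tokkio-deploy-key -C "tokkio-deployment"
# Print the public key content.
cat ~/.ssh/tokkio-deploy-key.pub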
Note
The prerequisites provisioned here can be reused across multiple projects and can be considered a one-time setup for most scenarios, unless their parameters are not acceptable for a particular deployment.
Login to the GCP Console#
Log in to the GCP Console as a user with admin access.
Click on the Navigation Menu in the top left corner to get to the page listing all products/services.
For all subsequent steps, navigate back to this page to find and create a new resource.
IAM & Admin Setup#
Service Account#
Select the IAM & Admin from the left navigation menu.
Select the Service Accounts.
Click on the Create Service Account button at the top of the Service Accounts page to create a new service account.
In the wizard:
Go to section Service account details:
Service account name: Provide an appropriate name. (eg: <my-org>-tokkio-automation)
Service account ID: This is auto-populated from the service account name.
Service account description: Provide an appropriate description for the service account.
Click on the Create and Continue button.
Go to section Grant this service account access to project: select the Owner role and click on Continue.
The Grant users access to this service account section is optional.
Click on DONE.
You will be automatically taken to the Service accounts page. If not:
Select the IAM & Admin from the left navigation menu.
Select the Service Accounts from the left menu.
The Service accounts page shows all the service accounts available in this project.
Identify the service account you created using the Email or Name field.
Click on the created service account in the list; this opens a page showing all the details about the service account.
Click on the KEYS tab in the middle of the page.
Click on ADD KEY, select the Create new key option, and select JSON as the Key type.
The key file will be automatically downloaded to your local machine. This key file contains the private key needed to authenticate as the service account with GCP.
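If you prefer the command line, a roughly equivalent sketch using the gcloud CLI (names and paths are illustrative; this assumes gcloud is installed and authenticated):
# Create the service account (name is illustrative).
gcloud iam service-accounts create my-org-tokkio-automation \
  --project=<your-project-id> \
  --display-name="Tokkio automation"

# Grant the Owner role used by this guide (scope down if your policies require it).
gcloud projects add-iam-policy-binding <your-project-id> \
  --member="serviceAccount:my-org-tokkio-automation@<your-project-id>.iam.gserviceaccount.com" \
  --role="roles/owner"

# Create and download a JSON key for the service account.
gcloud iam service-accounts keys create ./tokkio-automation-key.json \
  --iam-account="my-org-tokkio-automation@<your-project-id>.iam.gserviceaccount.com"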
Deployment State Storage#
From the Navigation Menu page:
Select Cloud Storage from the categories on the left.
Click on the +Create button to create a new bucket.
In the wizard:
In the Name your bucket section:
Provide a globally unique name for the bucket. (We are creating this bucket to store the deployment state.)
Optionally add Labels.
In the Choose where to store your data section, select Region as the location type and choose an appropriate region from the drop-down list.
Leave all other sections as is.
Click CREATE.
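If you prefer the command line, a roughly equivalent gcloud sketch (bucket name, project and location are placeholders):
# Create the bucket that will hold the deployment state.
gcloud storage buckets create gs://<my-unique-tokkio-state-bucket> \
  --project=<your-project-id> \
  --location=us-west1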
Base Domain#
From the Search box:
Search for Cloud DNS and select the Cloud DNS product from the drop-down list.
Click on the +CREATE ZONE button to create a new zone.
In the wizard:
Select Zone Type as Public.
Enter the Zone name.
Provide the DNS name.
Optionally add description.
Leave all other sections as is.
Click CREATE.
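If you prefer the command line, a roughly equivalent gcloud sketch (zone name, project and domain are placeholders):
# Create a public managed zone for the registered domain.
gcloud dns managed-zones create tokkio-zone \
  --project=<your-project-id> \
  --dns-name="<your-registered-domain>." \
  --description="Zone hosting Tokkio application records" \
  --visibility=public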
Prepare deployment config#
Download & Extract Deployment Artifact#
Download One-click Setup Scripts
Using the below commands, clone the repository and navigate to the one-click scripts for GCP:
# clone the repository https://github.com/NVIDIA/ACE.git
git clone https://github.com/NVIDIA/ACE.git

# navigate to the gcp folder
cd ACE/workflows/tokkio/scripts/one-click/gcp
Prepare secrets#
- The file secrets.sh can be set up as below so that sensitive data does not have to be committed and pushed as part of deploy-template.yml.
secrets.sh
#!/bin/bash
# SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: LicenseRef-NvidiaProprietary
#
# NVIDIA CORPORATION, its affiliates and licensors retain all intellectual
# property and proprietary rights in and to this material, related
# documentation and any modifications thereto. Any use, reproduction,
# disclosure or distribution of this material and related documentation
# without an express license agreement from NVIDIA CORPORATION or
# its affiliates is strictly prohibited.

# _ssh_public_key -> Your public ssh key's content
export _ssh_public_key='<replace_content_between_quotes_with_your_value>'

# _ngc_api_key -> Your ngc api key value
export _ngc_api_key='<replace_content_between_quotes_with_your_value>'

# _turnserver_password -> Password for turn server
export _turnserver_password='<replace_content_between_quotes_with_your_value>'

# Set the open ai API key
export _openai_api_key='<replace_content_between_quotes_with_your_value>'
Important
Be careful about whether or not to commit this file to your version control system, as it contains secrets.
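Two simple precautions, assuming you work from a git clone of the repository (paths are relative to the gcp one-click folder):
# Restrict permissions on the populated secrets file.
chmod 600 secrets.sh
# If secrets.sh is not already tracked by git, exclude it from version control.
echo "secrets.sh" >> .gitignore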
Prepare deploy template#
- Deploy Template Schema & Configuration
The deploy template deploy-template.yml is used to describe the infrastructure needed to set up your project/environment(s). It has separate sections to capture details for different needs, such as the provider config, TURN infra, etc. As shown in the layout diagram, you can choose to create one or more environments and infrastructures under a single project name.
Override the content of the deploy-template.yml file with your environment/application specific values. This drives the configuration of the infrastructure and the application being installed.
deploy-template.yml
# SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: LicenseRef-NvidiaProprietary
#
# NVIDIA CORPORATION, its affiliates and licensors retain all intellectual
# property and proprietary rights in and to this material, related
# documentation and any modifications thereto. Any use, reproduction,
# disclosure or distribution of this material and related documentation
# without an express license agreement from NVIDIA CORPORATION or
# its affiliates is strictly prohibited.

# NOTE: Refer to examples for various configuration options
project_name: '<replace-with-unique-name-to-identify-your-project>'
description: '<add-a-brief-description-about-this-project>'
template_version: '0.4.0'
csp: 'gcp'
backend:
  bucket: '<replace-with-pre-created-deployment-state-bucket-name>'
  credentials: '<replace-with-absolute-path-to-service-account-key-with-access-to-the-deployment-state-bucket>'
provider:
  project: '<replace-with-the-name-of-the-gcp-project-to-create-resources-in>'
  credentials: '<replace-with-absolute-path-to-service-account-key-with-admin-access-to-the-project>'
spec:
  location: '<replace-with-gcp-location-to-create-resources-in>'
  region: '<replace-with-gcp-region-to-create-resources-in>'
  zone: '<replace-with-gcp-zone-to-create-resources-in>'
  ui_bucket_location:
    location: '<replace-with-gcp-location-to-create-ui-bucket-in>'
    region: '<replace-with-gcp-region-to-create-ui-bucket-in>'
    alternate_region: '<replace-with-gcp-region-to-be-used-to-create-ui-bucket-that-need-dual-region-in>'
  network_cidr_range: '<replace-with-an-available-cidr-range>'
  ssh_public_key: '${_ssh_public_key}'
  dev_access_cidrs:
    - '<replace-with-list-of-dev-ip-cidrs>'
  user_access_cidrs:
    - '<replace-with-list-of-user-ip-cidrs>'
  dns_zone_name: '<replace-with-the-name-of-the-dns-zone-under-which-apps-will-be-registered>'
  api_sub_domain: '<replace-with-the-subdomain-to-be-used-for-the-api>'
  ui_sub_domain: '<replace-with-the-subdomain-to-be-used-for-the-ui>'
  elastic_sub_domain: '<replace-with-the-subdomain-to-be-used-for-the-elastic>'
  kibana_sub_domain: '<replace-with-the-subdomain-to-be-used-for-the-kibana>'
  grafana_sub_domain: '<replace-with-the-subdomain-to-be-used-for-the-grafana>'
  enable_cdn: true
  # instance_image: "<replace-with-ubuntu22.04-instance-image-name-defaults-to-'ubuntu-2204-jammy-v20240319'>"
  # api_instance_machine_type: "<replace-with-machine-type-such-as-n1-standard-64-default-guest-accelerator-nvidia-tesla-t4>"
  # api_instance_data_disk_size_gb: 1024
  # rp_instance_machine_type: "<defaults-to-e2-standard-8-change-if-desired>"
  # rp_instance_data_disk_size_gb: 1024
  # turn_server_provider: "<one-of-allowed-implementation-rp|coturn>"
  ngc_api_key: '${_ngc_api_key}' # NOTE: value of _ngc_api_key assumed to be provided in secrets.sh
  # api_settings:
  #   chart_org: "nvidia"
  #   chart_team: "ucs-ms"
  #   chart_name: "ucs-tokkio-audio-video-app"
  #   chart_version: "4.1.0"
  #   chart_namespace:
  #     api_ns: '<replace-with-k8s-namespace-for-application-chart/objects-defaults-to-default>'
  #     foundational_ns: '<replace-with-k8s-namespace-for-foundational-chart-defaults-to-foundational>'
  #   openai_api_key: '${_openai_api_key}' # NOTE: value of _openai_api_key assumed to be provided in secrets.sh
  #   cns_settings:
  #     cns_version: '<replace-with-required-cns-version-defaults-to-11.0>'
  #     cns_commit: '<replace-with-required-cns-git-hash-defaults-to-1abe8a8e17c7a15adb8b2585481a3f69a53e51e2>'
  #   gpu_driver_settings:
  #     gpu_driver_runfile_install: '<replace-with-true-or-false-defaults-to-true>'
  #     gpu_driver_version: '<replace-with-gpu_driver_version-defaults-to-gpu_driver_version-coming-from-cns_values_${cns_version}.yaml-file'
  # NOTE: Uncomment and update below section in case turn_server_provider = rp
  # --- RP SETUP CONFIGURATION START ---
  # rp_settings:
  #   chart_org: 'nvidia'
  #   chart_team: 'ucs-ms'
  #   chart_name: 'rproxy'
  #   chart_version: '0.0.5'
  #   cns_settings:
  #     cns_version: '<replace-with-required-cns-version-defaults-to-11.0>'
  #     cns_commit: '<replace-with-required-cns-git-hash-defaults-to-1abe8a8e17c7a15adb8b2585481a3f69a53e51e2>'
  # --- RP SETUP CONFIGURATION END ---
  # NOTE: Uncomment and update below section in case turn_server_provider = coturn
  # --- COTURN CONFIGURATION START ---
  # coturn_settings:
  #   realm: '<replace-with-a-realm-to-use-for-the-turnserver>'
  #   username: '<replace-with-a-username-to-use-for-the-turnserver>'
  #   password: '${_turnserver_password}' # NOTE: value of _turnserver_password assumed to be provided in secrets.sh
  # --- COTURN CONFIGURATION END ---
  # NOTE: Uncomment any of the below lines based on the need to override
  # ui_settings:
  #   resource_org: "nvidia"
  #   resource_team: "ucs-ms"
  #   resource_name: "tokkio_ui"
  #   resource_version: "4.0.4"
  #   resource_file: "ui.tar.gz"
  #   countdown_value: "90"
  #   enable_countdown: false/true
  #   enable_camera: false/true
  #   app_title: "<your-custom-title>"
  #   application_type: "<choice-of-app-type-e.g. qsr>"
  #   overlay_visible: <ui-settings-true|false>
  #   ui_window_visible: <ui-settings-true|false>
Note
To use the Tokkio LLM reference app, you need to update at least these properties: spec > api_settings > chart_name: "ucs-tokkio-audio-video-llm-app" and spec > ui_settings > application_type: "custom". For the QSR app, you need to set at least the spec > api_settings > openai_api_key property.
Each entry of this yml file is explained in the table below.
Deploy Template#

| Parameter name | Type | Optional | Description |
|---|---|---|---|
| project_name | string | | A unique name to identify the project. This is important to tear down resources later. |
| description | string | | A brief description of the project. |
| template_version | string | | 0.4.0 |
| csp | string | | gcp |
| backend | map | | Backend configuration. |
| backend > bucket | string | | Name of the GCS bucket in which the state of the provisioned resources is to be stored. |
| backend > credentials | string | | Absolute path of the GCP service account key with access to the state bucket. |
| provider | map | | Provider configuration. |
| provider > project | string | | GCP project ID where resources will be provisioned. |
| provider > credentials | string | | Absolute path of the GCP service account key with access to provision resources. |
| spec | map | | Application configuration. |
| spec > location | string | | GCP location code to be used to select the region. |
| spec > region | string | | GCP region in which to provision resources. |
| spec > zone | string | | Deployment zone to be used within the region. |
| spec > ui_bucket_location | map | | Location and region specifications for the bucket hosting the static web/UI. |
| spec > ui_bucket_location > location | string | | GCP location code to be used to select regions. |
| spec > ui_bucket_location > region | string | | First region of the dual region to be used for the bucket hosting the static web/UI. |
| spec > ui_bucket_location > alternate_region | string | | Second region of the dual region to be used for the bucket hosting the static web/UI. |
| spec > network_cidr_range | string | | Private CIDR range in which base, coturn and app resources will be created. |
| spec > ssh_public_key | string | | Content of the public key of the SSH key pair used for instance access. Prefer to provide via a variable in secrets.sh. |
| spec > dev_access_cidrs | array | | CIDR ranges from which SSH access should be allowed. |
| spec > user_access_cidrs | array | | CIDR ranges from which the application UI and API will be allowed access. |
| spec > dns_zone_name | string | | DNS zone name under which applications will be registered. |
| spec > api_sub_domain | string | | Subdomain to be used for the API endpoint. |
| spec > ui_sub_domain | string | | Subdomain to be used for the UI endpoint. |
| spec > elastic_sub_domain | string | | Subdomain to be used for the Elastic endpoint. |
| spec > kibana_sub_domain | string | | Subdomain to be used for the Kibana endpoint. |
| spec > grafana_sub_domain | string | | Subdomain to be used for the Grafana endpoint. |
| spec > enable_cdn | bool | | True if the UI needs to be served via a CDN cache; false otherwise. |
| spec > instance_image | string | yes | Instance image name of Ubuntu 22.04 to be used. Defaults to ubuntu-2204-jammy-v20240319. |
| spec > api_instance_machine_type | string | yes | VM machine type to be used for the API instance. Defaults to n1-standard-64 with guest accelerator nvidia-tesla-t4. |
| spec > api_instance_data_disk_size_gb | string | yes | Data disk size for the API instance. Defaults to 1024. |
| spec > rp_instance_machine_type | string | yes | VM machine type to be used for the RP instance. Defaults to e2-standard-8. |
| spec > rp_instance_data_disk_size_gb | string | yes | Data disk size for the RP instance. Defaults to 1024. |
| spec > turn_server_provider | string | yes | Either rp or coturn. Defaults to rp. |
| spec > ngc_api_key | string | | NGC API key with access to deployment artifacts. Prefer to provide via a variable in secrets.sh. |
| spec > api_settings | map | | Configuration to change the default API chart used. |
| spec > api_settings > chart_org | string | yes | NGC org of the API chart to be used. |
| spec > api_settings > chart_team | string | yes | NGC team of the API chart to be used. |
| spec > api_settings > chart_name | string | yes | NGC resource name of the API chart to be used. Defaults to ucs-tokkio-audio-video-app (QSR). |
| spec > api_settings > chart_version | string | yes | NGC resource version of the API chart to be used. |
| spec > api_settings > chart_namespace | map | yes | Kubernetes namespace configuration. |
| spec > api_settings > chart_namespace > api_ns | string | yes | Kubernetes namespace to be used to deploy the API chart. |
| spec > api_settings > chart_namespace > foundational_ns | string | yes | Kubernetes namespace to be used to deploy the foundational chart. |
| spec > api_settings > openai_api_key | string | yes | OpenAI API key for the QSR app (https://openai.com/blog/openai-api). Prefer to provide via a variable in secrets.sh. |
| spec > api_settings > cns_settings | map | yes | NVIDIA Cloud Native Stack settings. |
| spec > api_settings > cns_settings > cns_version | string | yes | NVIDIA Cloud Native Stack version to be used. Defaults to 11.0. |
| spec > api_settings > cns_settings > cns_commit | string | yes | NVIDIA Cloud Native Stack commit ID to be used. Defaults to 1abe8a8e17c7a15adb8b2585481a3f69a53e51e2. |
| spec > api_settings > gpu_driver_settings | map | yes | GPU driver configuration. |
| spec > api_settings > gpu_driver_settings > gpu_driver_runfile_install | string | yes | Enable GPU driver installation using the runfile. Defaults to true. |
| spec > api_settings > gpu_driver_settings > gpu_driver_version | string | yes | Configuration to change the GPU driver version to be used. Defaults to the version coming from CNS. |
| spec > rp_settings | map | yes | Configuration of RP as the TURN server. These values must be provided when spec > turn_server_provider is set to rp or left empty (defaults to rp). |
| spec > rp_settings > chart_org | string | | NGC org of the RP chart to be used. |
| spec > rp_settings > chart_team | string | | NGC team of the RP chart to be used. |
| spec > rp_settings > chart_name | string | | NGC resource name of the RP chart to be used. |
| spec > rp_settings > chart_version | string | | NGC resource version of the RP chart to be used. |
| spec > rp_settings > cns_settings | map | yes | NVIDIA Cloud Native Stack settings. |
| spec > rp_settings > cns_settings > cns_version | string | yes | NVIDIA Cloud Native Stack version to be used. Defaults to 11.0. |
| spec > rp_settings > cns_settings > cns_commit | string | yes | NVIDIA Cloud Native Stack commit ID to be used. Defaults to 1abe8a8e17c7a15adb8b2585481a3f69a53e51e2. |
| spec > coturn_settings | map | yes | Configuration of COTURN as the TURN server. These values must be provided when spec > turn_server_provider is set to coturn. |
| spec > coturn_settings > realm | string | | Realm name to be used during COTURN setup. |
| spec > coturn_settings > username | string | | Username to be used to connect to the COTURN server. |
| spec > coturn_settings > password | string | | Password to be used to connect to the COTURN server. Prefer to provide via a variable in secrets.sh. |
| spec > ui_settings | map | yes | Configuration to change the default UI resource. |
| spec > ui_settings > resource_org | string | yes | NGC org of the UI resource to be used. |
| spec > ui_settings > resource_team | string | yes | NGC team of the UI resource to be used. |
| spec > ui_settings > resource_name | string | yes | NGC resource name of the UI resource to be used. |
| spec > ui_settings > resource_version | string | yes | NGC resource version of the UI resource to be used. |
| spec > ui_settings > resource_file | string | yes | NGC resource file name of the UI resource to be used. |
| spec > ui_settings > countdown_value | number | yes | Countdown value in seconds. Defaults to 90. |
| spec > ui_settings > enable_countdown | bool | yes | Either true or false; enables the countdown in the UI. Defaults to false. |
| spec > ui_settings > enable_camera | bool | yes | Either true or false; enables the camera in the UI. Defaults to true. |
| spec > ui_settings > app_title | string | yes | Custom app title for the UI. |
| spec > ui_settings > application_type | string | yes | Custom application type for the UI. |
| spec > ui_settings > overlay_visible | bool | yes | Either true or false; makes the overlay visible in the UI. Defaults to true. |
| spec > ui_settings > ui_window_visible | bool | yes | Either true or false; makes the UI window visible. Defaults to false. |
Setup logs backup#
Audit logs of any changes made via the scripts are captured in a directory named logs at the same level as deploy-template.yml.
Take necessary measures to ensure these are backed up in case they are needed for debugging.
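For example, one way to copy the logs directory into the deployment state bucket created earlier (the destination path is illustrative):
# Copy the logs directory to a timestamped path in the state bucket.
gcloud storage cp --recursive logs "gs://<replace-with-deployment-state-bucket-name>/tokkio-deploy-logs/$(date +%Y%m%d-%H%M%S)/"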
Note
Any values defined in secrets.sh will be masked in the logs.
Deploy infrastructure and application#
Use the below commands to install, update, or uninstall the Tokkio application along with its infrastructure, as per the specs provided in the deploy template.
# To view available options
bash tokkio-deploy

# To preview changes based on deploy-template.yml without actually applying the changes
bash tokkio-deploy preview

# To install the changes shown by the preview option based on deploy-template.yml
bash tokkio-deploy install

# To show results/information about the project installed
bash tokkio-deploy show-results

# To uninstall the deployed infra and application
bash tokkio-deploy uninstall
Important
Both the install and uninstall options need to be run with care. We recommend the preview option to see the changes before install.
If you are looking for an option to print the details of your past installation, use the show-results option.
Warning
Any attempt to suspend (Ctrl + Z) the running deployment will leave the project in a state where changes can no longer be made via the scripts, and the resources created will have to be cleaned up manually via the web console. If the process absolutely has to be exited, prefer terminating it with Ctrl + C.
Known Issues#
On rare occasions, after a stop and start of the API instance, the kubectl get pods command gives the below issues:
Pod status is shown as Unknown
The connection to the server 10.0.0.165:6443 was refused - did you specify the right host or port?
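If you hit this, a generic first step (not part of the one-click scripts) is to check that the Kubernetes node services on the API instance came back up after the restart, and retry once the API server on port 6443 responds:
# Check the node services that back the Kubernetes control plane on the API instance.
sudo systemctl status kubelet containerd --no-pager

# If kubelet is not active, restarting it is a common first remedy.
sudo systemctl restart kubelet

# Retry once the API server is reachable again.
kubectl get pods -A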
Verify Deployment#
On successful deployment of the infrastructure, output will be displayed in a format like the one shown below.
Apply complete! Resources: <nn> added, <nn> changed, <nn> destroyed.

Outputs:

app_infra = {
  "<app_infra key>" = {
    "api_endpoint" = "https://<api_sub_domain>.<base_domain>"
    "private_ips" = [
      "<private_ip_of_app_instance>",
    ]
    "ui_endpoint" = "https://<ui_sub_domain>.<base_domain>"
  }
}
bastion_infra = {
  "<bastion_infra key>" = {
    "private_ip" = "<bastion_instance_private_ip>"
    "public_ip" = "<bastion_instance_public_ip>"
  }
}
rp_infra = {
  "<rp_infra key>" = {
    "private_ip" = "<rp_instance_private_ip>"
    "public_ip" = "<rp_instance_public_ip>"
  }
}
Use an ssh command in the below format to log into the application virtual machine.
# Replace content between '<' and '>' with appropriate values.
# The pem file referred to here must be the key associated with the public key used in the initial steps of the setup.
ssh -i <path-to-pem-file> -o StrictHostKeyChecking=no \
  -o ProxyCommand="ssh -i <path-to-pem-file> -W %h:%p -o StrictHostKeyChecking=no ubuntu@<bastion-vm-public-ip>" \
  ubuntu@<app-vm-private-ip>
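If your OpenSSH client supports ProxyJump, a roughly equivalent sketch that avoids the explicit ProxyCommand; loading the key into an agent lets both the bastion and app VM hops use it:
# Load the private key into an ssh agent so both hops can authenticate with it.
eval "$(ssh-agent -s)"
ssh-add <path-to-pem-file>

# Jump through the bastion to the app VM in one command.
ssh -o StrictHostKeyChecking=no -J ubuntu@<bastion-vm-public-ip> ubuntu@<app-vm-private-ip>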
Once logged into the terminal, run the below command to see the Kubernetes pods' statuses. All the pods should eventually reach the Running state.
ubuntu@qsrrc3-1-api-1:~$ kubectl get pods
NAME                                                        READY   STATUS    RESTARTS   AGE
a2f-a2f-deployment-6d9f4d6ddd-h6twk                         1/1     Running   0          85m
ace-agent-chat-controller-deployment-0                      1/1     Running   0          85m
ace-agent-chat-engine-deployment-687b4868c-whnrr            1/1     Running   0          85m
ace-agent-plugin-server-deployment-7f7b7f848f-srf6k         1/1     Running   0          85m
anim-graph-sdr-envoy-sdr-deployment-5c9c8d58c6-j8n9n        3/3     Running   0          85m
chat-controller-sdr-envoy-sdr-deployment-77975fc6bf-frnkm   3/3     Running   0          85m
ds-sdr-envoy-sdr-deployment-79676f5775-znkjc                3/3     Running   0          85m
ds-visionai-ds-visionai-deployment-0                        2/2     Running   0          85m
ia-animation-graph-microservice-deployment-0                1/1     Running   0          85m
ia-omniverse-renderer-microservice-deployment-0             1/1     Running   0          85m
ia-omniverse-renderer-microservice-deployment-1             1/1     Running   0          85m
ia-omniverse-renderer-microservice-deployment-2             1/1     Running   0          85m
mongodb-mongodb-bc489b954-zp4dk                             1/1     Running   0          85m
occupancy-alerts-api-app-cfb94cb7b-tvlkh                    1/1     Running   0          85m
occupancy-alerts-app-5b97f578d8-gfxsj                       1/1     Running   0          85m
redis-redis-5cb5cb8596-ph5rt                                1/1     Running   0          85m
redis-timeseries-redis-timeseries-55d476db56-7zktj          1/1     Running   0          85m
renderer-sdr-envoy-sdr-deployment-5d4d99c778-dlp4l          3/3     Running   0          85m
riva-speech-57dbbc9dbf-995jh                                1/1     Running   0          85m
tokkio-cart-manager-deployment-55476f746b-gkczz             1/1     Running   0          85m
tokkio-ingress-mgr-deployment-7cc446758f-8hc7j              3/3     Running   0          85m
tokkio-menu-api-deployment-748c8c6574-f4wsh                 1/1     Running   0          85m
tokkio-ui-server-deployment-55fcbdd9f4-6mdv5                1/1     Running   0          85m
tokkio-umim-action-server-deployment-74977db6d6-2867v       1/1     Running   0          85m
triton0-766cdf66b8-hpsqw                                    1/1     Running   0          85m
vms-vms-bc7455786-v2kd5                                     1/1     Running   0          85m
Note
Depending on several conditions, pods may take up to 50-60 minutes to reach the Running state.
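To monitor progress without re-running the command manually, standard kubectl options can be used on the app VM (the timeout below is illustrative):
# Stream pod status updates; press Ctrl+C to stop watching.
kubectl get pods --watch

# Or block until all pods in the default namespace report Ready.
kubectl wait --for=condition=Ready pods --all --timeout=3600s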
Once all the pods are in the Running state, try accessing the UI using the URL printed in the output attribute ui_endpoint, for example https://toc-one-ui.GCPedge.net.
When you open your URL in a browser, you should see the Tokkio application come up at this point.
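For a quick command-line check (assuming your workstation's IP falls within the configured user_access_cidrs), you can confirm that both endpoints respond; substitute the values printed in the deployment output:
# Expect an HTTP status code from each endpoint.
curl -s -o /dev/null -w '%{http_code}\n' https://<ui_sub_domain>.<base_domain>
curl -s -o /dev/null -w '%{http_code}\n' https://<api_sub_domain>.<base_domain>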

Teardown infrastructure and application#
To tear down all the infrastructure along with the application created through the above scripts, run the bash tokkio-deploy uninstall command.
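As an optional sanity check once the uninstall completes, you can confirm that no compute instances from the project remain (the name filter below is illustrative; adjust it to your project_name):
# List any remaining instances whose names contain the project name.
gcloud compute instances list --project=<your-project-id> --filter="name~'<project_name>'"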
Important
Both the install and uninstall options need to be run with care. We recommend the preview option to see the changes before install.
If you are looking for an option to print the details of your past installation, use the show-results option.