Appendix - Sim2Deploy Cloud Quickstart
Deployment Config File Samples
secret.sh
#!/bin/bash
# SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: LicenseRef-NvidiaProprietary
#
# NVIDIA CORPORATION, its affiliates and licensors retain all intellectual
# property and proprietary rights in and to this material, related
# documentation and any modifications thereto. Any use, reproduction,
# disclosure or distribution of this material and related documentation
# without an express license agreement from NVIDIA CORPORATION or
# its affiliates is strictly prohibited.
#
# _aws_access_key_id -> AWS access key id to create resources
export _aws_access_key_id='<replace_content_between_quotes_with_your_value>'
# _aws_secret_access_key -> AWS secret access key to create resources
export _aws_secret_access_key='<replace_content_between_quotes_with_your_value>'
# _ssh_public_key -> Your public ssh key's content
export _ssh_public_key='<replace_content_between_quotes_with_your_value>'
# _ngc_api_key -> Your ngc api key value
export _ngc_api_key='<replace_content_between_quotes_with_your_value>'
# _turnserver_password -> Password for turn server
export _turnserver_password='<replace_content_between_quotes_with_your_value>'
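Before deploying, it can help to confirm that every placeholder in secret.sh was actually replaced. A minimal hedged sketch (the check_secrets helper is illustrative, not part of the shipped tooling; variable names are taken from secret.sh above):

```shell
# Illustrative sanity check: fail if any required secret is unset or still
# at its placeholder value after 'source secret.sh'.
placeholder='<replace_content_between_quotes_with_your_value>'

check_secrets() {
  local missing=0 v
  for v in _aws_access_key_id _aws_secret_access_key _ssh_public_key _ngc_api_key _turnserver_password; do
    # ${!v} is bash indirect expansion: the value of the variable named in $v
    if [ -z "${!v:-}" ] || [ "${!v:-}" = "$placeholder" ]; then
      echo "missing: $v" >&2
      missing=1
    fi
  done
  return $missing
}

# Demo with dummy values (real values come from 'source secret.sh'):
export _aws_access_key_id='AKIA-dummy' _aws_secret_access_key='dummy' \
       _ssh_public_key='ssh-rsa dummy' _ngc_api_key='dummy' _turnserver_password='dummy'
check_secrets && echo "all secrets set"
```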
deploy-template.yml
# SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: LicenseRef-NvidiaProprietary
#
# NVIDIA CORPORATION, its affiliates and licensors retain all intellectual
# property and proprietary rights in and to this material, related
# documentation and any modifications thereto. Any use, reproduction,
# disclosure or distribution of this material and related documentation
# without an express license agreement from NVIDIA CORPORATION or
# its affiliates is strictly prohibited.

# NOTE: Refer to examples for various configuration options
project_name: '<replace-with-unique-name-to-identify-your-project>'
description: '<add-a-brief-description-about-this-project>'
template_version: '0.1.1'
csp: 'aws'
backend:
  encrypt: true
  dynamodb_table: '<replace-with-pre-created-deployment-state-dynamo-db-table-name>'
  bucket: '<replace-with-pre-created-deployment-state-bucket-name>'
  region: '<replace-with-aws-region-where-pre-created-deployment-state-bucket-exists>'
  access_key: '${_aws_access_key_id}' # NOTE: value of _aws_access_key_id assumed to be provided in secrets.sh
  secret_key: '${_aws_secret_access_key}' # NOTE: value of _aws_secret_access_key assumed to be provided in secrets.sh
provider:
  region: '<aws-region-to-provision-infra>'
  access_key: '${_aws_access_key_id}' # NOTE: value of _aws_access_key_id assumed to be provided in secrets.sh
  secret_key: '${_aws_secret_access_key}' # NOTE: value of _aws_secret_access_key assumed to be provided in secrets.sh
base_infra:
  # NOTE: Repeat below section for as many base setups as necessary
  base: ### Change this to any name to be used for base; otherwise it defaults to "base"
    spec:
      vpc_cidr_block: '10.0.0.0/24'
      ssh_public_key: '${_ssh_public_key}'
      dev_access_ipv4_cidr_blocks:
        - '<replace-with-list-of-user-ip-cidrs>' ### Change this to your VPN CIDRs to allow access to provisioned infra
      user_access_ipv4_cidr_blocks:
        - '<replace-with-list-of-user-ip-cidrs>' ### Change this to your VPN CIDRs to allow access to provisioned infra
coturn_infra:
  # NOTE: Repeat below section for as many COTURN environments as necessary
  coturn: ### Change this to any name to be used for coturn; otherwise it defaults to "coturn"
    base_ref: 'base' # NOTE: should match the name of a base env defined in the above section
    spec:
      turnserver_realm: 'test-domain.com'
      turnserver_username: '<coturn-username>'
      turnserver_password: '${_turnserver_password}' # NOTE: value of _turnserver_password assumed to be provided in secrets.sh
app_infra:
  # NOTE: Repeat below section for as many app environments as necessary
  mtmcsdg-1:
    base_ref: 'base' # NOTE: should match the name of a base env defined in the above section
    # NOTE: Uncomment below line in case app environment should use one of the setup COTURN environments
    coturn_ref: 'coturn' # NOTE: should match the name of a COTURN env with same base ref defined in the above section
    spec:
      ngc_api_key: '${_ngc_api_key}' # NOTE: value of _ngc_api_key assumed to be provided in secrets.sh
      # NOTE: Uncomment any of the below lines based on the need to override
      # --- OPTIONAL CONFIGURATION START ---
      app_instance_type: 'g5.48xlarge' ## 8x A10 GPU VM type
      #app_instance_data_disk_size_gb: 1024
      sdg_enable: true ### This must be set to true to be able to load the sample synthetic input data
      #app_sdg_data_ngc_resource_url: "nfgnkvuikvjm/mdx-v2-0/metropolis-apps-sample-input-data:v2.1-06132024"
      #app_ngc_k8s_values_res_url: "nfgnkvuikvjm/mdx-v2-0/metropolis-apps-k8s-deployment:v2.1-06142024"
      foundational_chart:
        org: "nfgnkvuikvjm"
        team: "mdx-v2-0"
        name: "mdx-foundation-sys-svcs"
        version: "v1.3"
      #foundational_override_values_file: ./../k8s-helm-values/foundational-sys/foundational-sys-monitoring-override-values.yaml ## By default the additional disk is mounted on "/opt"; only if that needs to change is the foundational override needed, else leave commented
      app_chart:
        org: "nfgnkvuikvjm"
        team: "mdx-v2-0"
        name: "mdx-mtmc-app"
        version: "1.0.37"
      app_override_values_file: ./../k8s-helm-values/MTMC-RTLS-SDG/mtmc-rtls-app-override-values.yaml
      nvstreamer_app_chart:
        org: "rxczgrvsg8nx"
        team: "vst-1-0"
        name: "nvstreamer"
        version: "0.2.32"
      nvstreamer_app_override_values_file: ./../k8s-helm-values/MTMC-RTLS-SDG/nvstreamer-with-ingress-values.yaml
      vst_app_chart:
        org: "rxczgrvsg8nx"
        team: "vst-1-0"
        name: "vst"
        version: "1.0.30"
      vst_app_override_values_file: ./../k8s-helm-values/MTMC-RTLS-SDG/vst-app-with-ingress-values.yaml
      ds_app_chart:
        org: "nfgnkvuikvjm"
        team: "mdx-v2-0"
        name: "mdx-wdm-ds-app"
        version: "0.0.33"
      wdm_ds_app_override_values_file: ./../k8s-helm-values/MTMC-RTLS-SDG/wdm-deepstream-mtmc-rtls-values.yaml
      target_group_configs: ### Uncomment only if additional k8s node ports need to be exposed for other app-related endpoints, for example Kibana on port 31560
        kibana-tg:
          port: 31560
          health_check:
            port: 31080
            path: "/elastic"
        grafana-tg:
          port: 32300
          health_check:
            port: 32300
            path: "/api/health"
      # ---- OPTIONAL CONFIGURATION END ----
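The `${...}` references in the template are resolved from the variables exported in secrets.sh at deploy time. A minimal sketch of that substitution step (illustrative only, not the shipped tooling; the `expand_env` helper is an assumption):

```python
# Illustrative sketch: expand ${name} references in template text with values
# from the environment, the way access_key: '${_aws_access_key_id}' picks up
# the value exported in secrets.sh.
import os
import re

def expand_env(text: str) -> str:
    """Replace ${name} with os.environ[name]; leave unknown names untouched."""
    return re.sub(r"\$\{(\w+)\}",
                  lambda m: os.environ.get(m.group(1), m.group(0)),
                  text)

os.environ["_aws_access_key_id"] = "AKIA-dummy"
print(expand_env("access_key: '${_aws_access_key_id}'"))  # access_key: 'AKIA-dummy'
```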
Parameter Explanation
Explanation for each parameter can be found here.
sdg-deploy.txt
Deployment template
sdg-deploy.txt
is used to compile the synthetic data generation (SDG) workflow infrastructure needed to set up your project/environment(s). Override the content of the sdg-deploy.txt file with your environment/application-specific values; this drives the configuration of the infrastructure and the application being installed.

sdg-deploy.txt:

--deployment-name=<deploy-name> --region '<aws-region-name>' --isaac --isaac-instance-type 'g5.12xlarge' --isaac-image 'nvcr.io/nvidia/isaac-sim:4.0.0' --oige 'no' --orbit 'no' --isaaclab 'no' --ngc-api-key '<ngc-api-key>' --ngc-api-key-check --aws-access-key-id '<aws-access-key>' --aws-secret-access-key '<aws-secret-key>' --no-ovami --existing 'ask'
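These flags live on a single line in sdg-deploy.txt and are split into individual arguments by xargs when piped to ./deploy-aws. A small demonstration of that mechanism, with echo standing in for the real ./deploy-aws script (file name and flag values here are dummies):

```shell
# Illustrative: xargs splits the one-line flag file into argv, stripping the
# single quotes, before invoking the command. 'echo' stands in for ./deploy-aws.
printf "%s\n" "--deployment-name=demo --region 'us-east-1' --isaac" > sdg-deploy-demo.txt
cat sdg-deploy-demo.txt | xargs echo ./deploy-aws
```

This prints `./deploy-aws --deployment-name=demo --region us-east-1 --isaac`, which is the command line the real deployment receives.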
Parameter Explanation
All the entries of this config file are explained in the table below:
SDG Deploy Template

Parameter name        | Type   | Optional | Description
--------------------- | ------ | -------- | -----------
deployment-name       | string |          | Deployment name for the SDG app.
region                | string |          | AWS region to deploy the infrastructure required for SDG.
isaac                 |        |          | Deploy/configure Isaac Sim in the provisioned VM so as to run simulation and generate new data. Defaults to "yes".
isaac-instance-type   | string | yes      | AWS VM type to use for the SDG app. By default it is configured to use a 4x A10 GPU VM - g5.12xlarge.
isaac-image           | string | yes      | Docker image to use for the Isaac Sim deployment. By default, the latest released image supported for the simulation workflow is used.
isaaclab              |        |          | Deploy/configure Isaac Lab in the provisioned VM so as to run simulation and generate new data. Defaults to "no".
ngc-api-key           | string |          | NGC API key to pull Docker images from the NGC team - mdx-v2.0.
aws-access-key-id     | string |          | AWS access key ID for accessing and provisioning infra in the AWS CSP.
aws-secret-access-key | string |          | AWS secret access key for authenticating and provisioning infra in the AWS CSP.
Sample Output for Bucket Details
Apply complete! Resources: <nn> added, <nn> changed, <nn> destroyed.

Outputs:

S3_Bucket_details = {
  "<bastion_infra key>" = "<S3_Bucket_Name>"
}
app_infra = {
  "<app_infra key>" = {
    "private_ips" = [
      "<private_ip_of_app_instance>",
    ]
  }
}
app_infra = {
  "<app_infra key>" = {
    alb_dns_name = <dns_name_for_aws_lb>
  }
}
bastion_infra = {
  "<bastion_infra key>" = {
    "private_ip" = "<bastion_instance_private_ip>"
    "public_ip" = "<bastion_instance_public_ip>"
  }
}
coturn_infra = {
  "<coturn_infra key>" = {
    "port" = 3478
    "private_ip" = "<coturn_instance_private_ip>"
    "public_ip" = "<coturn_instance_public_ip>"
  }
}
Verify SDG Deployment
Outputs:

cloud = "aws"
isaac_ip = "<public-ip-aws-vm>"
isaac_vm_id = "<VM-Resource-ID>"
ovami_ip = "NA"
ssh_key = <sensitive>

*************************************************
* Isaac Sim is deployed at <AWS-VM-Public-IP>
*************************************************

* To connect to Isaac Sim via SSH:

  ssh -i state/<deployment-name>/key.pem -o StrictHostKeyChecking=no ubuntu@<AWS-VM-Public-IP>

* To connect to Isaac Sim via noVNC:

  1. Open http://<AWS-VM-Public-IP>:6080/vnc.html?host=<AWS-VM-Public-IP>&port=6080 in your browser.
  2. Click "Connect" and use password "<random-generate-password>"

* To connect to Isaac Sim via NoMachine:

  0. Download the NoMachine client at https://downloads.nomachine.com/, install and launch it.
  1. Click the "Add" button.
  2. Enter Host: "<AWS-VM-Public-IP>".
  3. In "Configuration" > "Use key-based authentication with a key you provide", select file "state/sdg-test-isaac-1/key.pem".
  4. Click the "Connect" button.
  5. Enter "ubuntu" as the username when prompted.
Use an ssh command in the below format to log into the application instance.

# Replace content between '<' and '>' with the appropriate values.
# The .pem file referred to here must be the key associated with the public key used in the initial setup steps.
ssh -i <path-to-pem-file> -o StrictHostKeyChecking=no -o ProxyCommand="ssh -i <path-to-pem-file> -W %h:%p -o StrictHostKeyChecking=no ubuntu@<bastion-vm-public-ip>" ubuntu@<app-vm-private-ip>

# To connect to Isaac Sim via SSH:
ssh -i state/<deployment-name>/key.pem -o StrictHostKeyChecking=no ubuntu@<AWS-VM-Public-IP>
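The ProxyCommand option makes ssh hop through the bastion host to reach the app VM's private IP. Equivalently, the hop can be captured once in ~/.ssh/config so the app instance is reachable with a single short command (a hedged sketch: the host aliases "sdg-bastion" and "sdg-app" are hypothetical names, not defined by the deployment):

```
# Illustrative ~/.ssh/config entry; replace the <...> placeholders as above.
Host sdg-bastion
    HostName <bastion-vm-public-ip>
    User ubuntu
    IdentityFile <path-to-pem-file>
    StrictHostKeyChecking no

Host sdg-app
    HostName <app-vm-private-ip>
    User ubuntu
    IdentityFile <path-to-pem-file>
    StrictHostKeyChecking no
    ProxyJump sdg-bastion
```

With this in place, `ssh sdg-app` performs the same two-hop connection.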
Once logged in, run the below commands to see the Docker status of the Isaac Sim container.
$ docker ps
$ docker logs isaacsim
[127.981s] app ready
[128.169s] Isaac Sim App is loaded.   <<=== this log line means the Isaac Sim UI is fully operational. It can be accessed using NoMachine or VNC.
$
Note
Depending on several conditions, the Isaac Sim UI may take up to 15-20 minutes to reach the Active state.
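Rather than re-running docker logs by hand during that window, the readiness line can be polled. A minimal sketch (the is_ready helper and the stand-in log source are illustrative; the container name isaacsim comes from the output above):

```shell
# Illustrative readiness probe: succeeds once the logs contain the line that
# marks the Isaac Sim UI as operational.
is_ready() {
  # $1: command that prints the container logs, e.g. "docker logs isaacsim"
  $1 2>&1 | grep -q "Isaac Sim App is loaded."
}

# Real usage would loop:  until is_ready "docker logs isaacsim"; do sleep 60; done
# Demo with a stand-in log source instead of a live container:
fake_logs() { printf '[127.981s] app ready\n[128.169s] Isaac Sim App is loaded.\n'; }
is_ready fake_logs && echo "UI ready"
```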
Workaround for SDG Deploy Error
Deploy Error seen
[Isaac Automator v3.0.0]:/app$ cat sdg-deploy.txt | xargs ./deploy-aws --existing repair
* Deploymemnt exists, what would you like to do? See --help for details. (repair, modify, replace, run_ansible) [replace]: Aborted!
[Isaac Automator v3.0.0]:/app$ cat sdg-deploy.txt | xargs ./deploy-aws --existing=repair
* Deploymemnt exists, what would you like to do? See --help for details. (repair, modify, replace, run_ansible) [replace]: Aborted!
Workaround
While running the SDG deploy workflow script, if you see the above error for an existing deployment, please clean up the existing deployment using
./destroy <deployment-name>
or update sdg-deploy.txt with a new deployment name. Users can update the existing deployment by simply running
./deploy-aws
without the xargs command and providing the configs at runtime. Please make sure the parameters match those supplied in the file sdg-deploy.txt when running the deployment for the first time using the command
cat sdg-deploy.txt | xargs ./deploy-aws
Tear Down Deployment
To tear down all the infrastructure along with the application created through the above scripts, run the
bash mtmc-app-deploy uninstall
command for RTLS app teardown.
Important
Both the install and uninstall options need to be run with care. We recommend the preview option to see the changes before install.
If you are looking for an option to print the details of your past installation, use the show-results option.
$ ./destroy <deployment-name>
Note
Please run the destroy command from the location where the deployment was triggered, as state files for the SDG deployment are kept locally.