AWS CSP Automated Setup Guide

Overview

In this guide, we will go through the steps needed to:

  • Understand the architecture of the infrastructure we will be setting up to host the Tokkio Application on the AWS CSP.

  • Perform the necessary steps to procure the pre-requisite access and information to use the automated deployment scripts.

  • Create one or more deployments of the Tokkio Application using the automated deployment scripts.

  • Verify the deployment.

  • Tear down the created infrastructure when no longer required.

Infrastructure Layout

Setting up the Tokkio application on AWS requires several AWS resources, such as EC2 instances, security groups, an Application Load Balancer, and CloudFront for hosting UI content. While several patterns can be followed to bring up the infrastructure needed for Tokkio, the layout below shows the approach we will take.


AWS Infrastructure Layout

In addition to bringing up AWS resources, we also have to download the Tokkio application and its dependency artifacts, configure them, and install them. The automation scripts simplify this by abstracting away the complexity and letting you work primarily with two files: deploy-template.yml and secrets.sh.

deploy-template.yml is an abstraction of the infrastructure specification needed to bring up the Tokkio application. At a high level, we define the base infrastructure specification (e.g. the VPC), then add the TURN infrastructure specification (e.g. a TURN EC2 instance) and the application infrastructure specification (e.g. a GPU instance). The TURN and application infrastructure are established on top of the base infrastructure specified in deploy-template.yml.

secrets.sh is the mechanism for providing secrets of two categories: credentials such as the AWS secret access key, which the automation needs to interact with your AWS account, and application secrets such as the NGC API key, OpenAI API key, and COTURN password.

Note

We will skip some optional features such as Auto Scaling in this reference setup.

Important

Many of the resources in this setup may not fall under the AWS Free Tier. Check the AWS billing reference pages to understand the cost implications.

Prerequisites

This setup guide assumes you have the following conditions met:

  • AWS access keys for an IAM user

  • On your AWS account, procure an access key ID and secret access key for programmatic access to your AWS resources.

  • Prefer to use a non-root IAM user with administrator access.

  • Refer to the AWS documentation to create an access key.

  • S3 Bucket for Backend state

  • This script uses S3 buckets to store the references to the resources that it spins up.

  • Create an S3 bucket to be used to store the deployment state.

  • Ensure the bucket is not publicly accessible and is accessible only to your account (for example, using the access keys procured in the previous step).

  • Refer to the AWS documentation

  • DynamoDB Table for Backend state

  • This script uses DynamoDB tables to prevent concurrent access to the same deployment as they are being spun up.

  • Create a DynamoDB table to be used to manage access to the deployment state.

  • Define the Partition key as LockID and type String.

  • The Sort key need not be defined.

  • Reference AWS documentation.

  • Domain and Route53 hosted zone to deploy applications under

  • The Tokkio application needs a domain and a Route53 hosted zone to support HTTPS.

  • Create a domain or make Route53 the DNS service for your existing domain.

  • Refer to the AWS Route53 developer guide.

  • Access to an Ubuntu 20.04 or 22.04 based machine, with a user that has sudo privileges, to run the automated deployment scripts.

  • Set up an NVIDIA GPU Cloud (NGC) API key by following these instructions.

  • SSH key pair to access the instances we are going to set up.

  • You may use an existing SSH key pair for this access or create a new pair.

  • See the documentation to create a public/private SSH key pair. Example commands for several of these prerequisites are shown after this list.
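
If you prefer the command line, the sketch below shows one way to create these prerequisites with the AWS CLI and ssh-keygen. The bucket name, table name, region, and key path are placeholders; adjust them to your environment.

# Verify that the procured access keys work (assumes the AWS CLI is installed and configured with them)
aws sts get-caller-identity

# Create the S3 bucket for the deployment state and block public access
aws s3api create-bucket --bucket my-tokkio-state-bucket --region us-west-2 \
  --create-bucket-configuration LocationConstraint=us-west-2
aws s3api put-public-access-block --bucket my-tokkio-state-bucket \
  --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

# Create the DynamoDB table for deployment state locking, with LockID (String) as the partition key
aws dynamodb create-table --table-name my-tokkio-state-lock \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST

# Create a new SSH key pair if you do not already have one
ssh-keygen -t rsa -b 4096 -f ~/.ssh/tokkio_key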

Note

The pre-requisites provisioned here can be reused across multiple projects and can be considered a one-time setup for most scenarios, unless their parameters are not suitable for a particular deployment.

Prepare deployment config

Download & Extract Deployment Artifact

  • Download One-click Setup Scripts

  • Using the commands below, clone the repository and navigate to the one-click scripts for AWS:

#clone repository https://github.com/NVIDIA/ACE.git
git clone https://github.com/NVIDIA/ACE.git

#navigate to aws folder
cd ACE/workflows/tokkio/scripts/one-click/aws

Prepare secrets

The file secrets.sh can be set up as below so that sensitive data does not have to be committed and pushed as part of deploy-template.yml.

secrets.sh

#!/bin/bash

# SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: LicenseRef-NvidiaProprietary
#
# NVIDIA CORPORATION, its affiliates and licensors retain all intellectual
# property and proprietary rights in and to this material, related
# documentation and any modifications thereto. Any use, reproduction,
# disclosure or distribution of this material and related documentation
# without an express license agreement from NVIDIA CORPORATION or
# its affiliates is strictly prohibited.

# _aws_access_key_id -> AWS access key id to create resources
export _aws_access_key_id='<replace_content_between_quotes_with_your_value>'
# _aws_secret_access_key -> AWS secret access key to create resources
export _aws_secret_access_key='<replace_content_between_quotes_with_your_value>'
# _ssh_public_key -> Your public ssh key's content
export _ssh_public_key='<replace_content_between_quotes_with_your_value>'
# _ngc_api_key -> Your ngc api key value
export _ngc_api_key='<replace_content_between_quotes_with_your_value>'
# _openai_api_key -> Your openai API key 
export _openai_api_key='<replace_content_between_quotes_with_your_value>'
# _coturn_password -> Password for coturn server
export _coturn_password='<replace_content_between_quotes_with_your_value>'

Important

Be careful about whether or not you commit this file to your version control system, as it contains secrets.
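
As a simple precaution, you can restrict the file's permissions and keep it out of version control; a minimal sketch, assuming a git checkout:

# Restrict read access to the current user
chmod 600 secrets.sh

# Keep secrets.sh out of git
echo 'secrets.sh' >> .gitignore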

Prepare deploy template

Deploy Template Schema & Configuration

The deploy template deploy-template.yml is used to compile the infrastructure specification needed to set up your project/environment(s). It has separate sections to capture details for different needs, such as the provider config, backend config, API settings, etc. As shown in the layout diagram below, you can create one unique environment per deploy-template.yml, identified by project_name.

Deployment Template structure

Override the content of the deploy-template.yml file with your environment/application-specific values. This drives the configuration of the infrastructure and of the application being installed.

deploy-template.yml

# SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: LicenseRef-NvidiaProprietary
#
# NVIDIA CORPORATION, its affiliates and licensors retain all intellectual
# property and proprietary rights in and to this material, related
# documentation and any modifications thereto. Any use, reproduction,
# disclosure or distribution of this material and related documentation
# without an express license agreement from NVIDIA CORPORATION or
# its affiliates is strictly prohibited.

# NOTE: Refer to examples for various configuration options

project_name: '<replace-with-unique-name-to-identify-your-project>'
description: '<add-a-brief-description-about-this-project>'
template_version: '0.4.0'
csp: 'aws'
backend:
  encrypt: true
  dynamodb_table: '<replace-with-pre-created-deployment-state-dynamo-db-table-name>'
  bucket: '<replace-with-pre-created-deployment-state-bucket-name>'
  region: '<replace-with-aws-region-where-pre-created-deployment-state-bucket-exists>'
  access_key: '${_aws_access_key_id}'
  secret_key: '${_aws_secret_access_key}'
provider:
  region: '<replace-with-aws-region-to-create-resources-in>'
  access_key: '${_aws_access_key_id}'
  secret_key: '${_aws_secret_access_key}'
spec:
  vpc_cidr_block: '<replace-with-an-available-cidr-range>'
  ssh_public_key: '${_ssh_public_key}'
  dev_access_ipv4_cidr_blocks:
    - '<replace-with-list-of-dev-ip-cidrs>'
  user_access_ipv4_cidr_blocks:
    - '<replace-with-list-of-user-ip-cidrs>'
  base_domain: '<replace-with-the-dns-hosted-zone-under-which-apps-will-be-registered>'
  api_sub_domain: '<replace-with-the-subdomain-to-be-used-for-the-api>'
  ui_sub_domain: '<replace-with-the-subdomain-to-be-used-for-the-ui>'
  # elastic_sub_domain: '<replace-with-subdomain-for-elastic>'
  # kibana_sub_domain: '<replace-with-subdomain-for-kibana>'
  # grafana_sub_domain: '<replace-with-subdomain-for-grafana>'
  # ami_name : <replace-with-ubuntu22.04-ami-name-defaults-to-'ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*'>
  # app_instance_type: 'g4dn.12xlarge | g5.12xlarge'
  # app_instance_data_disk_size_gb: 1024
  # rp_instance_type: '<defaults-to-t3.large-change-if-desired>'
  # rp_instance_data_disk_size_gb: 1024
  # turn_server_provider: "<one-of-allowed-implementation-rp|coturn>"
  cdn_cache_enabled: false
  ngc_api_key: '${_ngc_api_key}' # NOTE: value of _ngc_api_key assumed to be provided in secrets.sh
  api_settings:
  #  chart_org: "nvidia"
  #  chart_team: "ucs-ms"
  #  chart_name: "ucs-tokkio-audio-video-app"
  #  chart_version: "4.1.0"
  #  chart_namespace:
  #    api_ns: '<replace-with-k8s-namespace-for-application-chart/objects-defaults-to-default>'
  #    foundational_ns: '<replace-with-k8s-namespace-for-foundational-chart-defaults-to-foundational>'
  #  openai_api_key: '${_openai_api_key}' # NOTE: value of _openai_api_key assumed to be provided in secrets.sh
  #  cns_settings:
  #    cns_version: '<replace-with-required-cns-version-defaults-to-11.0>'
  #    cns_commit: '<replace-with-required-cns-git-hash-defaults-to-1abe8a8e17c7a15adb8b2585481a3f69a53e51e2>'
  #  gpu_driver_settings:
  #    gpu_driver_runfile_install: '<replace-with-true-or-false-defaults-to-true>'
  #    gpu_driver_version: '<replace-with-gpu_driver_version-defaults-to-gpu_driver_version-coming-from-cns_values_${cns_version}.yaml-file>'

  # NOTE: Uncomment and update below section in case turn_server_provider = rp
  # --- RP SETUP CONFIGURATION START ---
  # rp_settings:
  #     chart_org: 'nvidia'
  #     chart_team: 'ucs-ms'
  #     chart_name: 'rproxy'
  #     chart_version: '0.0.5'
  #     cns_settings:
  #       cns_version: '<replace-with-required-cns-version-defaults-to-11.0>'
  #       cns_commit: '<replace-with-required-cns-git-hash-defaults-to-1abe8a8e17c7a15adb8b2585481a3f69a53e51e2>'
  # --- RP SETUP CONFIGURATION END ---

  # NOTE: Uncomment and update below section in case turn_server_provider = coturn
  # --- COTURN CONFIGURATION START ---
  # coturn_settings:
  #   realm: '<replace-with-a-realm-to-use-for-the-turnserver>'
  #   username: '<replace-with-a-username-to-use-for-the-turnserver>'
  #   password: '${_coturn_password}' # NOTE: value of _coturn_password assumed to be provided in secrets.sh
  # --- COTURN CONFIGURATION END ---

  # NOTE: Uncomment any of the below lines based on the need to override
  # ui_settings:
  #   resource_org: "nvidia"
  #   resource_team: "ucs-ms"
  #   resource_name: "tokkio_ui"
  #   resource_version: "4.0.4"
  #   resource_file: "ui.tar.gz"
  #   countdown_value: "90"
  #   enable_countdown: false/true
  #   enable_camera: false/true
  #   app_title: "<your-custom-title>"
  #   application_type: "<choice-of-app-type-eg-qsr>"
  #   overlay_visible: <ui-settings-true|false>
  #   ui_window_visible: <ui-settings-true|false>

Note

To use the Tokkio LLM reference app, you need to update at least these properties: spec > api_settings > chart_name: "ucs-tokkio-audio-video-llm-app" and spec > ui_settings > application_type: "custom". For the QSR app, you need to set at least the spec > api_settings > openai_api_key property.
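
For example, a minimal sketch of the overrides described in the note above, applied by uncommenting and editing the corresponding lines of deploy-template.yml:

spec:
  api_settings:
    chart_name: 'ucs-tokkio-audio-video-llm-app'
  ui_settings:
    application_type: 'custom'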

Each entry of this YAML file is explained in the table below.

Deploy Template

| Parameter name | Type | Optional | Description |
|---|---|---|---|
| project_name | string | | A unique name to identify the project. This is important to tear down the resources later. |
| description | string | | A brief description of the project. |
| template_version | string | | Version of the deploy template. Use 0.4.0. |
| csp | string | | Cloud service provider. Use aws. |
| backend | map | | Backend configuration. |
| backend > encrypt | bool | | Whether to encrypt the state while stored in the S3 bucket. |
| backend > dynamodb_table | string | | Name of the AWS DynamoDB table used to manage concurrent access to the state. |
| backend > bucket | string | | Name of the AWS S3 bucket in which the state of the provisioned resources is stored. |
| backend > region | string | | AWS region where the state S3 bucket and DynamoDB table are created. |
| backend > access_key | string | | AWS access key ID used to access the backend bucket and table. Prefer to provide via a variable in secrets.sh. |
| backend > secret_key | string | | AWS secret access key used to access the backend bucket and table. Prefer to provide via a variable in secrets.sh. |
| provider | map | | Provider configuration. |
| provider > region | string | | AWS region where the resources of the application will be deployed. |
| provider > access_key | string | | AWS access key ID used to provision resources. Prefer to provide via a variable in secrets.sh. |
| provider > secret_key | string | | AWS secret access key used to provision resources. Prefer to provide via a variable in secrets.sh. |
| spec | map | | Application configuration. |
| spec > vpc_cidr_block | string | | Private CIDR range in which the base, TURN and app resources will be created. |
| spec > ssh_public_key | string | | Content of the public key of the SSH key pair used for instance access. Prefer to provide via a variable in secrets.sh. |
| spec > dev_access_ipv4_cidr_blocks | array | | CIDR ranges from which SSH access should be allowed. |
| spec > user_access_ipv4_cidr_blocks | array | | CIDR ranges from which the application UI and API will be allowed access. |
| spec > base_domain | string | | Route53 hosted zone name to be used as the base domain for the API and the UI. |
| spec > api_sub_domain | string | | Sub-domain of the app API endpoint. |
| spec > ui_sub_domain | string | | Sub-domain of the app UI endpoint. |
| spec > elastic_sub_domain | string | yes | Sub-domain of the Elastic endpoint. |
| spec > kibana_sub_domain | string | yes | Sub-domain of the Kibana endpoint. |
| spec > grafana_sub_domain | string | yes | Sub-domain of the Grafana endpoint. |
| spec > cdn_cache_enabled | bool | | True if the UI should be served via a CDN cache; false otherwise. |
| spec > ngc_api_key | string | | NGC API key with access to deployment artifacts. Prefer to provide via a variable in secrets.sh. |
| spec > ami_name | string | yes | Name of the Ubuntu 22.04 AMI to be used. Defaults to ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*. |
| spec > app_instance_type | string | yes | AWS instance type to be used for the API instance. Defaults to g4dn.12xlarge. |
| spec > app_instance_data_disk_size_gb | string | yes | Data disk size (GB) for the API instance. Defaults to 1024. |
| spec > turn_server_provider | string | yes | Either rp or coturn. Defaults to rp. |
| spec > rp_instance_type | string | yes | AWS instance type to be used for the RP instance. Defaults to t3.large. |
| spec > rp_instance_data_disk_size_gb | string | yes | Data disk size (GB) for the RP instance. Defaults to 1024. |
| spec > rp_settings | map | yes | Configuration for RP as the TURN server. These values must be provided when spec > turn_server_provider is set to rp or left empty (defaults to rp). |
| spec > rp_settings > chart_org | string | | NGC org of the RP chart to be used. |
| spec > rp_settings > chart_team | string | | NGC team of the RP chart to be used. |
| spec > rp_settings > chart_name | string | | NGC resource name of the RP chart to be used. |
| spec > rp_settings > chart_version | string | | NGC resource version of the RP chart to be used. |
| spec > rp_settings > cns_settings | map | yes | NVIDIA Cloud Native Stack configuration. |
| spec > rp_settings > cns_settings > cns_version | string | yes | NVIDIA Cloud Native Stack version to be used. Defaults to 11.0. |
| spec > rp_settings > cns_settings > cns_commit | string | yes | NVIDIA Cloud Native Stack commit ID to be used. Defaults to 1abe8a8e17c7a15adb8b2585481a3f69a53e51e2. |
| spec > coturn_settings | map | yes | Configuration for COTURN as the TURN server. These values must be provided when spec > turn_server_provider is set to coturn. |
| spec > coturn_settings > realm | string | | Realm name to be used during COTURN setup. |
| spec > coturn_settings > username | string | | Username to be used to connect to the COTURN server. |
| spec > coturn_settings > password | string | | Password to be used to connect to the COTURN server. Prefer to provide via a variable in secrets.sh. |
| spec > api_settings | map | | Configuration to change the default API chart used. |
| spec > api_settings > chart_org | string | yes | NGC org of the API chart to be used. |
| spec > api_settings > chart_team | string | yes | NGC team of the API chart to be used. |
| spec > api_settings > chart_name | string | yes | NGC resource name of the API chart to be used. Defaults to ucs-tokkio-audio-video-app (QSR). |
| spec > api_settings > chart_version | string | yes | NGC resource version of the API chart to be used. |
| spec > api_settings > chart_namespace | map | yes | Kubernetes namespace configuration. |
| spec > api_settings > chart_namespace > api_ns | string | yes | Kubernetes namespace used to deploy the API chart. |
| spec > api_settings > chart_namespace > foundational_ns | string | yes | Kubernetes namespace used to deploy the foundational chart. |
| spec > api_settings > openai_api_key | string | yes | OpenAI API key for the QSR app (https://openai.com/blog/openai-api). Prefer to provide via a variable in secrets.sh. |
| spec > api_settings > cns_settings | map | yes | NVIDIA Cloud Native Stack configuration. |
| spec > api_settings > cns_settings > cns_version | string | yes | NVIDIA Cloud Native Stack version to be used. Defaults to 11.0. |
| spec > api_settings > cns_settings > cns_commit | string | yes | NVIDIA Cloud Native Stack commit ID to be used. Defaults to 1abe8a8e17c7a15adb8b2585481a3f69a53e51e2. |
| spec > api_settings > gpu_driver_settings | map | yes | GPU driver configuration. |
| spec > api_settings > gpu_driver_settings > gpu_driver_runfile_install | string | yes | Enable GPU driver installation using the runfile. Defaults to true. |
| spec > api_settings > gpu_driver_settings > gpu_driver_version | string | yes | GPU driver version to be used. Defaults to the version coming with the Cloud Native Stack. |
| spec > ui_settings | map | yes | Configuration to change the default UI resource used. |
| spec > ui_settings > resource_org | string | yes | NGC org of the UI resource to be used. |
| spec > ui_settings > resource_team | string | yes | NGC team of the UI resource to be used. |
| spec > ui_settings > resource_name | string | yes | NGC resource name of the UI resource to be used. |
| spec > ui_settings > resource_version | string | yes | NGC resource version of the UI resource to be used. |
| spec > ui_settings > resource_file | string | yes | NGC resource file name of the UI resource to be used. |
| spec > ui_settings > countdown_value | number | yes | Countdown value in seconds. Defaults to 90. |
| spec > ui_settings > enable_countdown | bool | yes | Either true or false. Enables the countdown in the UI. Defaults to false. |
| spec > ui_settings > enable_camera | bool | yes | Either true or false. Enables the camera in the UI. Defaults to true. |
| spec > ui_settings > app_title | string | yes | Custom app title for the UI. |
| spec > ui_settings > application_type | string | yes | Custom application type for the UI. |
| spec > ui_settings > overlay_visible | bool | yes | Either true or false. Makes the overlay visible in the UI. Defaults to true. |
| spec > ui_settings > ui_window_visible | bool | yes | Either true or false. Makes the UI window visible. Defaults to false. |

Setup logs backup

Audit logs for any changes made via the script are captured in a directory named logs at the same level as deploy-template.yml. Take the necessary measures to ensure these logs are backed up in case they are needed for debugging, for example:
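
One simple approach, assuming the logs directory sits next to deploy-template.yml, is to archive it periodically and copy the archive to durable storage of your choice:

# Archive the logs directory with a timestamp
backup_file="tokkio-logs-$(date +%Y%m%d-%H%M%S).tar.gz"
tar -czf "${backup_file}" logs/

# Optionally copy the archive to an S3 bucket of your choice (bucket name is a placeholder)
aws s3 cp "${backup_file}" s3://my-tokkio-log-backups/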

Note

Any values defined in secrets.sh will be masked in the logs.

Deploy infrastructure and application

Use the commands below to install, update, or uninstall the Tokkio application along with its infrastructure, as per the spec provided in deploy-template.yml.

# To view available options
bash tokkio-deploy

# To preview changes based on deploy-template.yml without actually applying the changes
bash tokkio-deploy preview

# To install changes showed in preview option based on deploy-template.yml
bash tokkio-deploy install

# To show results/information about the project installed
bash tokkio-deploy show-results

# To uninstall the deployed infra and application
bash tokkio-deploy uninstall

Important

Both the install and uninstall options need to be run with care. We recommend the preview option to review the changes before running install. If you are looking for an option to print the details of a past installation, use the show-results option. A typical sequence is shown below.
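
For example, a typical first deployment previews the changes, applies them, and then prints the resulting endpoints:

bash tokkio-deploy preview
bash tokkio-deploy install
bash tokkio-deploy show-results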

Warning

Any attempt to suspend the running deployment with Ctrl + Z will result in an inability to make further changes to the project via the scripts, as well as the need to manually clean up the created resources via the web console. If the process absolutely must be exited, prefer terminating it with Ctrl + C.

Known Issues

  • On rare occasions, after stopping and starting the API instance, the kubectl get pods command shows the following issues:

    • Pod status is shown as Unknown.

    • The connection to the server 10.0.0.165:6443 was refused - did you specify the right host or port?

Verify Deployment

On successful deployment of the infrastructure, output is displayed in a format like the one shown below.

Apply complete! Resources: <nn> added, <nn> changed, <nn> destroyed.

Outputs:

app_infra = {
  "api_endpoint" = "https://<api_sub_domain>.<base_domain>"
  "elasticsearch_endpoint" = "https://elastic-<project_name>.<base_domain>"
  "grafana_endpoint" = "https://grafana-<project_name>..<base_domain>"
  "kibana_endpoint" = "https://kibana-<project_name>..<base_domain>"
  "private_ips" = [
  "<private_ip_of_app_instace>",
  ]
  "ui_endpoint" = "https://<ui_sub_domain>.<base_domain>"
 }
bastion_infra = {
  "private_ip" = "<bastion_instance_private_ip>"
  "public_ip" = "<bastion_instance_public_ip>"
}
rp_infra = {
  "private_ip" = "<rp_instance_private_ip>"
  "public_ip" = "<rp_instance_public_ip>"
}

Use an SSH command in the format below to log into the application instance.


#Replace content between '<' and '>' with the appropriate values.
#The pem file referred to here must be the private key associated with the public key used in the initial steps of the setup.
ssh -i <path-to-pem-file> -o StrictHostKeyChecking=no -o ProxyCommand="ssh -i <path-to-pem-file>  -W %h:%p -o StrictHostKeyChecking=no ubuntu@<bastion-instance-public-ip>" ubuntu@<app-instance-private-ip>
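
Optionally, the bastion hop can be captured in your SSH client configuration so that subsequent logins are shorter. A sketch for OpenSSH, assuming the same key is used for both hosts (the host aliases are arbitrary):

# ~/.ssh/config
Host tokkio-bastion
    HostName <bastion-instance-public-ip>
    User ubuntu
    IdentityFile <path-to-pem-file>

Host tokkio-app
    HostName <app-instance-private-ip>
    User ubuntu
    IdentityFile <path-to-pem-file>
    ProxyJump tokkio-bastion

# With the above in place, the login shortens to:
# ssh tokkio-app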

Once logged into the terminal, run the command below to see the Kubernetes pod statuses. All the pods should eventually reach the Running state.

ubuntu@ip-10-0-0-135:~$  kubectl get pods
NAME                                                        READY   STATUS    RESTARTS   AGE
a2f-a2f-deployment-6d9f4d6ddd-n6gc9                         1/1     Running   0          71m
ace-agent-chat-controller-deployment-0                      1/1     Running   0          71m
ace-agent-chat-engine-deployment-687b4868c-dx7z8            1/1     Running   0          71m
ace-agent-plugin-server-deployment-7f7b7f848f-58l9z         1/1     Running   0          71m
anim-graph-sdr-envoy-sdr-deployment-5c9c8d58c6-dh7qx        3/3     Running   0          71m
chat-controller-sdr-envoy-sdr-deployment-77975fc6bf-tw9b4   3/3     Running   0          71m
ds-sdr-envoy-sdr-deployment-79676f5775-64knd                3/3     Running   0          71m
ds-visionai-ds-visionai-deployment-0                        2/2     Running   0          71m
ia-animation-graph-microservice-deployment-0                1/1     Running   0          71m
ia-omniverse-renderer-microservice-deployment-0             1/1     Running   0          71m
ia-omniverse-renderer-microservice-deployment-1             1/1     Running   0          71m
ia-omniverse-renderer-microservice-deployment-2             1/1     Running   0          71m
mongodb-mongodb-bc489b954-jw45w                             1/1     Running   0          71m
occupancy-alerts-api-app-cfb94cb7b-lnnff                    1/1     Running   0          71m
occupancy-alerts-app-5b97f578d8-wtjws                       1/1     Running   0          71m
redis-redis-5cb5cb8596-gncxk                                1/1     Running   0          71m
redis-timeseries-redis-timeseries-55d476db56-2shpt          1/1     Running   0          71m
renderer-sdr-envoy-sdr-deployment-5d4d99c778-qm8sz          3/3     Running   0          71m
riva-speech-57dbbc9dbf-dmzpp                                1/1     Running   0          71m
tokkio-cart-manager-deployment-55476f746b-7xbrg             1/1     Running   0          71m
tokkio-ingress-mgr-deployment-7cc446758f-bz6kz              3/3     Running   0          71m
tokkio-menu-api-deployment-748c8c6574-z8jdz                 1/1     Running   0          71m
tokkio-ui-server-deployment-55fcbdd9f4-qwmtw                1/1     Running   0          71m
tokkio-umim-action-server-deployment-74977db6d6-sp682       1/1     Running   0          71m
triton0-766cdf66b8-6dsmq                                    1/1     Running   0          71m
vms-vms-bc7455786-6w7cz                                     1/1     Running   0          71m

Note

Depending on several conditions, pods may take up to 50-60 minutes to reach the Running state.
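
To follow the progress without re-running the command manually, you can watch the pod statuses (press Ctrl + C to stop watching):

# Stream pod status updates until all pods report Running
kubectl get pods --watch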

Once all the pods are in the Running state, try accessing the UI using the URL printed in the ui_endpoint output attribute, for example https://awsdemoui.csptokkiodemo.com. When you open your URL in a browser, you should see the Tokkio application come up.

Application UI

Teardown infrastructure and application

To tear down all the infrastructure along with the application created through the above scripts, run the uninstall command:
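
# Tear down all resources associated with the project defined in deploy-template.yml
bash tokkio-deploy uninstall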

Important

Both the install and uninstall options need to be run with care. We recommend the preview option to review the changes before running install. If you are looking for an option to print the details of a past installation, use the show-results option.