Getting Started

Once you have installed and started the container following Installation, you can start the AI-Assisted Annotation (AIAA) server inside the container with the simple command: start_aas.sh.

Usually you will want to preserve your data and logs by launching with a workspace, for example:

# Run with workspace
# Remember we mounted the local disk to /aiaa-experiments earlier
start_aas.sh --workspace /aiaa-experiments/aiaa-1

The following options are available when starting the AI-Assisted Annotation server.

| Option | Default Value | Example | Description |
|---|---|---|---|
| BASIC | | | |
| workspace | /var/nvidia/aiaa | --workspace /aiaa-experiments/aiaa-1 | Workspace location for the AIAA server to save all configurations, logs, and models. It is therefore always recommended to use a shared workspace when starting AIAA. |
| debug | false | --debug true | Enable debug-level logging for the AIAA server logs. |
| auto_reload | false | --auto_reload true | The AIAA server auto-reloads models whenever the configuration is updated externally. Enable this option when multiple AIAA servers are installed and share a common workspace. |
| admin_hosts | * | --admin_hosts 10.20.1.23,122.34.1.3.4 | Restrict which client hosts can send /admin requests to the AIAA server (e.g. for managing models). * (star) allows any host; otherwise provide a list of client IP addresses/prefixes to allow. |
| SSL Support | | | |
| ssl | false | --ssl true | Run the AIAA server in SSL mode. |
| ssl_cert_file | | --ssl_cert_file server.cert | SSL certificate file path. |
| ssl_pkey_file | | --ssl_pkey_file server.key | SSL private key file path. |
| Native/Triton | | | |
| engine | TRITON | --engine AIAA | Use this option to run the AIAA server in native mode (AIAA) or with Triton as the backend inference engine. |
| triton_ip | localhost | --triton_ip 10.18.2.13 | If anything other than 127.0.0.1 or localhost is specified, AIAA will not start a local Triton server for inference; otherwise AIAA starts a local Triton instance. |
| triton_port | 8000 | --triton_port 9001 | Triton server port (http: 8000, grpc: 8001, monitor: 8002). |
| triton_proto | http | | Triton server protocol (http or grpc). |
| triton_shmem | no | --triton_shmem cuda | Whether to use shared-memory communication between AIAA and Triton (no, system, or cuda). |
| triton_model_path | <workspace dir>/triton_models | | Triton models path. |
| triton_start_timeout | 120 | | Time in seconds that AIAA waits to make sure the Triton server is up and running. |
| triton_model_timeout | 30 | | Time in seconds that AIAA waits to make sure a model is loaded and ready for serving in Triton. |
| FineTune | | | |
| fine_tune | false | --fine_tune true | If set to true, automatically fine-tunes the models in AIAA based on the samples directory (see Model Fine-Tune Setup). |
| fine_tune_hour | 0 | --fine_tune_hour 1 | The scheduled hour (0-23) each day at which all models are fine-tuned. |
| OTHERS | | | |
| monitor | false | --monitor true | Whether API instrumentation is allowed. If enabled, you can view the monitoring dashboard at http://host:port/dashboard. |
| client_id | host-80 | --client_id aiaa-instance-1 | When deploying AIAA servers on multiple nodes or AWS with a shared disk, use this option to identify each AIAA server. |
| use_cupy | false | --use_cupy true | Use CuPy for a performance boost when you have large GPU memory and more than one GPU. |
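As a combined example, several of these options can be used together on one command line. A minimal sketch, using only the flags documented above (the certificate files and the admin host address are placeholder values):

# Start AIAA with a shared workspace, SSL enabled, /admin requests restricted
# to one client host, and the monitoring dashboard turned on
start_aas.sh \
    --workspace /aiaa-experiments/aiaa-1 \
    --ssl true \
    --ssl_cert_file server.cert \
    --ssl_pkey_file server.key \
    --admin_hosts 10.20.1.23 \
    --monitor true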

Note

For PyTorch support, please check PyTorch Support in AIAA.

Note

Native mode means that AIAA does not run the Triton inference server as a backend; it uses native TensorFlow or PyTorch to run inference directly.

Native mode is supported for compatibility, and it also gives users the flexibility to write their own inference procedures.
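For example, a minimal sketch of starting the server in native mode, using the flags documented above:

# Run inference natively (TensorFlow/PyTorch); no Triton server is started
start_aas.sh --engine AIAA --workspace /aiaa-experiments/aiaa-1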

Stop AIAA

The simple command to stop the AI-Assisted Annotation server inside the container is: stop_aas.sh.
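Because all state is kept in the workspace, you can stop the server and later restart it against the same workspace, for example:

stop_aas.sh
# ...later: resume with the previously saved configurations, logs, and models
start_aas.sh --workspace /aiaa-experiments/aiaa-1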

Workspace

The AIAA server uses a workspace directory (if not specified, the default inside the docker is /var/nvidia/aiaa) for saving configs, models, logs, etc. If you mount an external workspace path while running the docker, you can shut down and rerun the docker multiple times without losing data. See the advanced options for running the docker.

The following files/folders from the workspace directory are explained below. Note that INTERNAL means the file or directory is managed by the AIAA server, and we suggest not changing it.

| File/Folder | Description |
|---|---|
| aiaa_server_config.json | INTERNAL - The AIAA server supports automatic loading of models; the corresponding configs are saved in this file. |
| aiaa_server_dashboard.db | INTERNAL - AIAA web/API activities are monitored through the Flask dashboard, and the corresponding data is saved here. |
| downloads/ | INTERNAL - Temporary downloads from NGC happen here; the temporary data is removed after a model is successfully imported into the AIAA server. |
| logs/ | INTERNAL - AIAA server logs are stored here. |
| models/ | INTERNAL - All serving models in frozen format are stored here. |
| triton_models/ | INTERNAL - All serving Triton models are stored here, unless the user provides a different triton_model_path when starting the AIAA server. |
| lib/ | EXTERNAL - Put your custom transforms/inferences/inference pipelines here and use them in config_aiaa.json (Bring your own models). |
| samples/{model}/ | EXTERNAL - Add your training samples here to trigger incremental training for a model with new samples (Model Fine-Tune Setup). |
| mmars/{model}/ | INTERNAL/EXTERNAL - Put your MMAR here to trigger incremental training for a model with new samples. The AIAA server also archives any imported MMARs here. |
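For orientation, a populated workspace might look roughly like this; the {model} directories take the name of the corresponding model, and my_model is a placeholder:

/aiaa-experiments/aiaa-1/
├── aiaa_server_config.json
├── aiaa_server_dashboard.db
├── downloads/
├── logs/
├── models/
├── triton_models/
├── lib/
├── samples/my_model/
└── mmars/my_model/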

Note

We suggest you create a new folder to be the workspace so that if you want to delete these files you can just remove that workspace folder.
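For example, a minimal sketch:

# Create a dedicated folder on the mounted disk and use it as the workspace
mkdir -p /aiaa-experiments/aiaa-1
start_aas.sh --workspace /aiaa-experiments/aiaa-1
# ...later, to discard everything AIAA has saved, remove the whole folder:
# rm -rf /aiaa-experiments/aiaa-1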


Logs

Once the server is up and running, you can watch or pull the server logs in the browser at http://[server_ip]:[port]/logs.

Tip

You do not have to be in a bash shell inside the container to see the log files.
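For example, assuming the container was launched with its port published as $LOCAL_PORT (see Installation), you can also fetch the logs from the command line:

curl http://127.0.0.1:$LOCAL_PORT/logs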


Monitor

To track API usage, profiling, etc., developers can log in and access the monitoring dashboard for the AIAA server. To enable this functionality, start the AIAA server with the option --monitor true.
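A minimal sketch, assuming the container port is published as $LOCAL_PORT:

# Start with monitoring enabled, then open the dashboard in a browser:
start_aas.sh --workspace /aiaa-experiments/aiaa-1 --monitor true
# http://127.0.0.1:$LOCAL_PORT/dashboard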

Tip

Login Username: admin; Default password: admin

Note

All the URLs with 0.0.0.0 or 127.0.0.1 are only accessible if your browser and the server are on the same machine. The $LOCAL_PORT is the port you used when launching the container in Installation. You can use ip a to find your server's IP address and then check the logs remotely at http://[server_ip]:[port]/logs.

