Getting Started
Once you have installed and started the container following Installation, the simple command to start the AI-Assisted Annotation server inside the container is: start_aas.sh
Usually you want to preserve your data and logs by launching with a workspace, for example:
# Run with workspace
# Remember we mount local disk to this /aiaa-experiments earlier
start_aas.sh --workspace /aiaa-experiments/aiaa-1
The following options are available when starting the AI-Assisted Annotation Server.
| Option | Default Value | Example | Description |
|---|---|---|---|
| **BASIC** | | | |
| workspace | /var/nvidia/aiaa | | Workspace location for the AIAA server to save all configurations, logs, and models. It is always recommended to use a shared workspace when starting AIAA. |
| debug | false | | Enable debug-level logging for the AIAA server logs. |
| auto_reload | false | | The AIAA server will auto-reload models whenever the configuration is updated externally. Enable this option when multiple AIAA servers are installed and share a common workspace. |
| admin_hosts | * | | Restrict which client hosts can send /admin requests to the AIAA server (e.g. for managing models). * (star) allows any host; otherwise provide a list of client IP addresses/prefixes to allow. |
| **SSL Support** | | | |
| ssl | false | | Run the AIAA server in SSL mode. |
| ssl_cert_file | | | SSL certificate file path. |
| ssl_pkey_file | | | SSL key file path. |
| **Native/Triton** | | | |
| engine | TRITON | | Use this option to run the AIAA server in native mode (AIAA) or with Triton as the background inference engine. |
| triton_ip | localhost | | If anything other than 127.0.0.1 or localhost is specified, AIAA will not start a local Triton server for inference; otherwise, AIAA will start a local instance of the Triton server. |
| triton_port | 8000 | | Triton server port (http: 8000, grpc: 8001, monitor: 8002). |
| triton_proto | http | | Triton server protocol (http or grpc). |
| triton_shmem | no | | Whether to use shared-memory communication between AIAA and Triton (no, system, or cuda). |
| triton_model_path | <workspace dir>/triton_models | | Triton models path. |
| triton_start_timeout | 120 | | Wait time in seconds for AIAA to make sure the Triton server is up and running. |
| triton_model_timeout | 30 | | Wait time in seconds for AIAA to make sure a model is loaded and up for serving in Triton. |
| **FineTune** | | | |
| fine_tune | false | | If set to true, AIAA will automatically fine-tune the models based on the samples directory (see Model Fine-Tune Setup). |
| fine_tune_hour | 0 | | The scheduled hour (0–23) each day at which to fine-tune all the models. |
| **OTHERS** | | | |
| monitor | false | | Whether API instrumentation is allowed. If enabled, you can view the monitoring dashboard by visiting http://host:port/dashboard. |
| client_id | host-80 | | If you are deploying the AIAA server on multiple nodes/AWS with a shared disk, you might need this to identify each AIAA server. |
| use_cupy | false | | Use CuPy for a performance boost when you have large GPU memory and more than one GPU. |
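Options from the table above can be combined on a single command line. As a sketch, assuming each option is passed as `--<option> <value>` (the same pattern as the `--workspace` and `--monitor` examples in this guide), and with placeholder certificate paths:

```shell
# Sketch: launch with SSL enabled and an explicit Triton setup.
# Flag names mirror the option names in the table above; the
# certificate/key paths are placeholders for your own files.
start_aas.sh \
    --workspace /aiaa-experiments/aiaa-1 \
    --ssl true \
    --ssl_cert_file /aiaa-experiments/certs/server.crt \
    --ssl_pkey_file /aiaa-experiments/certs/server.key \
    --engine TRITON \
    --triton_proto http \
    --triton_port 8000
```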
For PyTorch support, please check PyTorch Support in AIAA.
Native mode means that AIAA does not run the Triton inference server as a backend; it uses native TensorFlow or PyTorch to run inference directly.
Native mode is supported for compatibility, and it also gives users the flexibility to write their own inference procedure.
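For example, to run in native mode you would select the native engine instead of Triton. A sketch, where the engine value `AIAA` is inferred from the table's description of native mode:

```shell
# Sketch: native mode, so no local Triton server is started.
# The engine value "AIAA" is an assumption based on the options table.
start_aas.sh --workspace /aiaa-experiments/aiaa-1 --engine AIAA
```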
Stop AIAA
The simple command to stop the AI-Assisted Annotation server inside the container is: stop_aas.sh
Workspace
The AIAA server uses a workspace directory (if not specified, the default inside the docker is /var/nvidia/aiaa) for saving configs, models, logs, etc. You can shut down and run the docker multiple times if you mount an external workspace path while running the docker. See the advanced options for running the docker.
Following are a few files/folders from the workspace directory. Note that INTERNAL means the file or directory is managed by the AIAA server, and we suggest not changing it.
| File/Folder | Description |
|---|---|
| aiaa_server_config.json | |
| aiaa_server_dashboard.db | |
| downloads/ | |
| logs/ | |
| models/ | |
| triton_models/ | |
| lib/ | |
| samples/{model}/ | |
| mmars/{model}/ | |
We suggest you create a new folder to serve as the workspace, so that if you want to delete these files you can simply remove that workspace folder.
Logs
Once the server is up and running, you can watch or pull the server logs in the browser:
- http://127.0.0.1:$LOCAL_PORT/logs (the server fetches the last 100 lines from the current log file)
- http://127.0.0.1:$LOCAL_PORT/logs?lines=-1 (fetches everything from the current log file)
You don't have to be in bash mode to see the log files.
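The same endpoints can also be pulled from a terminal, for example with curl. A sketch, where `LOCAL_PORT=80` is a placeholder for whatever port you mapped when launching the container:

```shell
# Placeholder: use the port you mapped when launching the container.
LOCAL_PORT=80

# Last 100 lines of the current log file (server default):
curl "http://127.0.0.1:${LOCAL_PORT}/logs"

# Everything from the current log file:
curl "http://127.0.0.1:${LOCAL_PORT}/logs?lines=-1"
```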
Monitor
To track API usage, profiling, etc., developers can log in and access the monitoring dashboard for the AIAA server.
To enable this functionality, please start the AIAA server with the option --monitor true.
Login username: admin; default password: admin
All the URLs with 0.0.0.0 or 127.0.0.1 are only accessible if your browser and server are on the same machine.
The $LOCAL_PORT is the port you used when launching the container in Installation.
You can use ip a to find your server's IP address and then view the logs remotely at http://[server_ip]:[port]/logs.