Python CLI Client#
Prerequisites#
A reference Python CLI client is provided along with VSS. The client internally calls the REST APIs exposed by VSS.
The client is included in the VSS container image at /opt/nvidia/via/via_client_cli.py.
The Python package dependencies for the CLI client can be installed using:
pip3 install tabulate tqdm sseclient-py requests fastapi uvicorn
The CLI client can be executed by running:
python3 via_client_cli.py <command> <args> [--print-curl-command]
By default, the client assumes that the VSS API server is running at http://localhost:8000. This can be configured by exporting the environment variable VIA_BACKEND=<VIA_API_URL> or by passing the --backend <VIA_API_URL> argument.
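The precedence sketched below is an illustration: it assumes the --backend flag wins over the VIA_BACKEND environment variable, which in turn wins over the built-in default. The actual client logic may differ.

```python
import os

DEFAULT_BACKEND = "http://localhost:8000"

def resolve_backend(cli_backend=None, env=None):
    """Pick the VSS API URL: --backend flag, then VIA_BACKEND, then the default."""
    env = os.environ if env is None else env
    if cli_backend:
        return cli_backend
    return env.get("VIA_BACKEND", DEFAULT_BACKEND)
```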
The CLI client also provides an option to print the curl command for any operation. This can be done by passing the --print-curl-command argument to the client.
To get a list of all supported commands and the options for each command, run:
python3 via_client_cli.py -h
python3 via_client_cli.py <command> -h
The following sections describe each of the commands in detail.
Files Commands#
Add File#
Calls POST /files
internally. Uploads a file, or adds a file as a path accessible to the server. Prints the file id and other details.
Reference:
via_client_cli.py add-file [-h] [--add-as-path] [--is-image]
[--backend BACKEND] [--print-curl-command] file
Example for uploading a file:
Note
Supported file types: mp4 and mkv containers with H.264/H.265 video; jpg and png for images.
python3 via_client_cli.py add-file video.mp4
Example for adding a file as a path (the file path must be accessible inside the container):
python3 via_client_cli.py add-file --add-as-path /media/video.mp4
Example for uploading an image file:
python3 via_client_cli.py add-file image.jpg --is-image
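Because the command wraps POST /files, an upload can also be issued directly with requests (a listed dependency). The multipart and form field names below (file, filename) are illustrative assumptions, not the confirmed schema; use --print-curl-command to see the exact request the client sends.

```python
import requests

def build_add_file_request(backend, path, add_as_path=False):
    """Assemble requests.post() arguments for POST /files.

    Field names here are illustrative assumptions, not the confirmed schema.
    """
    url = f"{backend}/files"
    if add_as_path:
        # Register a path that must be accessible inside the container; no upload
        return {"url": url, "data": {"filename": path}}
    # Upload the file contents as multipart form data
    return {"url": url, "files": {"file": open(path, "rb")}}

def add_file(backend, path, add_as_path=False):
    resp = requests.post(**build_add_file_request(backend, path, add_as_path))
    resp.raise_for_status()
    return resp.json()  # expected to include the new file id
```

For example, add_file("http://localhost:8000", "video.mp4") mirrors the first upload command above.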
List Files#
Calls GET /files
internally. Prints the list of files added to the server and their details in a tabular format.
Reference:
via_client_cli.py list-files [-h] [--backend BACKEND]
[--print-curl-command]
Example:
python3 via_client_cli.py list-files
Get File Details#
Calls GET /files/{id}
internally. Prints the details of the file.
Reference:
via_client_cli.py file-info [-h] [--backend BACKEND]
[--print-curl-command] file_id
Example:
python3 via_client_cli.py file-info 7ce1127a-2009-4bfa-bdf8-efa9e1f37fa4
Get File Contents#
Calls GET /files/{id}/content
internally. Saves the content to a new file.
Reference:
via_client_cli.py file-content [-h] [--backend BACKEND]
[--print-curl-command] file_id
Example:
python3 via_client_cli.py file-content 7ce1127a-2009-4bfa-bdf8-efa9e1f37fa4
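The same download can be performed directly against GET /files/{id}/content, streaming the response body to disk so large videos are not held in memory. This is a minimal sketch; the output filename is whatever the caller chooses.

```python
import requests

def content_url(backend, file_id):
    return f"{backend}/files/{file_id}/content"

def download_file_content(backend, file_id, out_path):
    # Stream the GET /files/{id}/content body to disk in 1 MiB chunks
    with requests.get(content_url(backend, file_id), stream=True) as resp:
        resp.raise_for_status()
        with open(out_path, "wb") as f:
            for chunk in resp.iter_content(chunk_size=1 << 20):
                f.write(chunk)
```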
Delete File#
Calls DELETE /files/{id}
internally. Prints the delete status and file details.
Reference:
via_client_cli.py delete-file [-h] [--backend BACKEND]
[--print-curl-command] file_id
Example:
python3 via_client_cli.py delete-file 7ce1127a-2009-4bfa-bdf8-efa9e1f37fa4
Live Stream Commands#
Add Live Stream#
Calls POST /live-stream
internally. Prints the live-stream id if it is added successfully.
Reference:
via_client_cli.py add-live-stream [-h] [--description DESCRIPTION]
[--username USERNAME] [--password PASSWORD] [--backend BACKEND]
[--print-curl-command]
live_stream_url
Example:
python3 via_client_cli.py add-live-stream --description "Some live stream description" \
rtsp://192.168.1.100:8554/video/media1
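The underlying POST /live-stream request carries the stream URL, an optional description, and optional credentials in its JSON body. The field names below are illustrative assumptions; --print-curl-command shows the exact request.

```python
import requests

def build_live_stream_body(url, description=None, username=None, password=None):
    # JSON body for POST /live-stream; field names are illustrative assumptions
    body = {"liveStreamUrl": url}
    if description is not None:
        body["description"] = description
    if username is not None:
        body["username"] = username
    if password is not None:
        body["password"] = password
    return body

def add_live_stream(backend, url, **kwargs):
    resp = requests.post(f"{backend}/live-stream",
                         json=build_live_stream_body(url, **kwargs))
    resp.raise_for_status()
    return resp.json()  # expected to include the new live-stream id
```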
List Live Streams#
Calls GET /live-stream
internally. Prints the list of live-streams and their details in a tabular format.
Reference:
via_client_cli.py list-live-streams [-h] [--backend BACKEND]
[--print-curl-command]
Example:
python3 via_client_cli.py list-live-streams
Delete Live Stream#
Calls DELETE /live-stream/{id}
internally. Prints the status confirming deletion of the live stream.
Reference:
via_client_cli.py delete-live-stream [-h] [--backend BACKEND] [--print-curl-command] video_id
Example:
python3 via_client_cli.py delete-live-stream ea071500-3a47-4e6f-87da-1bc796075344
Models Commands#
List Models#
Calls GET /models
internally. Prints the list of models loaded by the server and their details in a tabular format.
Reference:
via_client_cli.py list-models [-h] [--backend BACKEND] [--print-curl-command]
Example:
python3 via_client_cli.py list-models
Summarization Command#
Calls POST /summarize
internally. Triggers summarization on a file or live-stream and blocks until summarization is complete or the process is interrupted.
The command allows passing some configurable parameters with the summarize request.
For files, results are available after the entire file is summarized. The command then prints the results.
For live-streams, results become available periodically. The period depends on the chunk_duration and summary_duration. Interrupting the summarize command does not stop summarization on the server side. You can re-connect to the live-stream by re-running the summarize command with the same live-stream id.
Multiple image files can be summarized together by specifying --id <image-file-id>
multiple times. This works only for image files.
To enable later chat or Q&A based on the prompts in the current summarize API call, add --enable-chat.
To enable alerts for live-streams and files using Server-Sent Events, specify the events to be alerted on using --alert <alert_name>:<event1>,<event2>,..., for example --alert "safety-issues:workers not wearing ppe,boxes falling". This argument can be specified multiple times for multiple alerts.
For more details on each argument, see the help command and the VSS
API reference. Detailed VSS API documentation is available at http://<VSS_API_ENDPOINT>/docs
after VSS is deployed.
Reference:
via_client_cli.py summarize [-h] --id ID --model MODEL
[--stream]
[--chunk-duration CHUNK_DURATION]
[--chunk-overlap-duration CHUNK_OVERLAP_DURATION]
[--summary-duration SUMMARY_DURATION]
[--prompt PROMPT]
[--caption-summarization-prompt CAPTION_SUMMARIZATION_PROMPT]
[--summary-aggregation-prompt SUMMARY_AGGREGATION_PROMPT]
[--file-start-offset FILE_START_OFFSET]
[--file-end-offset FILE_END_OFFSET]
[--model-temperature MODEL_TEMPERATURE]
[--model-top-p MODEL_TOP_P]
[--model-top-k MODEL_TOP_K]
[--model-max-tokens MODEL_MAX_TOKENS]
[--model-seed MODEL_SEED]
[--alert ALERT]
[--enable-chat]
[--response-format {json_object,text}]
[--backend BACKEND]
[--print-curl-command]
Example:
python3 via_client_cli.py summarize \
--id ea071500-3a47-4e6f-87da-1bc796075344 \
--model gpt-4o \
--chunk-duration 60 \
--stream \
--prompt "Write a dense caption about the video containing events like ..." \
--model-temperature 0.8
python3 via_client_cli.py summarize \
--id ea071500-3a47-4e6f-87da-1bc796075344 \
--model gpt-4o \
--chunk-duration 10
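As a direct equivalent of the streaming example above, the POST /summarize request can be issued with requests and its Server-Sent Events consumed with sseclient-py (both listed dependencies). The body field names below mirror the CLI flags and are assumptions, not the confirmed schema; --print-curl-command shows the real request.

```python
import json
import requests

def build_summarize_body(asset_id, model, chunk_duration=60, prompt=None):
    # Field names mirror the CLI flags; the exact schema is an assumption
    body = {"id": asset_id, "model": model,
            "chunk_duration": chunk_duration, "stream": True}
    if prompt is not None:
        body["prompt"] = prompt
    return body

def stream_summaries(backend, **kwargs):
    # POST /summarize with stream=True and yield each SSE payload as it arrives
    import sseclient  # sseclient-py, one of the client's listed dependencies
    resp = requests.post(f"{backend}/summarize",
                         json=build_summarize_body(**kwargs), stream=True)
    resp.raise_for_status()
    for event in sseclient.SSEClient(resp).events():
        yield json.loads(event.data)
```

Iterating over stream_summaries("http://localhost:8000", asset_id=..., model="gpt-4o") then prints each periodic result as it is pushed by the server.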
Chat Command#
Calls POST /chat/completions
internally. Triggers a Q&A query on a file or live-stream and blocks until results are received.
The command allows sending the prompt (question) along with some configurable parameters for the Q&A request.
For the chat command to work, a summarize query with --enable-chat must already have completed.
For more details on each argument, see the help command and the VSS
API reference. Detailed VSS API documentation is available at http://<VSS_API_ENDPOINT>/docs
after VSS is deployed.
Reference:
via_client_cli.py chat [-h] --id ID --model MODEL
[--stream]
[--prompt PROMPT]
[--file-start-offset FILE_START_OFFSET]
[--file-end-offset FILE_END_OFFSET]
[--model-temperature MODEL_TEMPERATURE]
[--model-top-p MODEL_TOP_P]
[--model-top-k MODEL_TOP_K]
[--model-max-tokens MODEL_MAX_TOKENS]
[--model-seed MODEL_SEED]
[--response-format {json_object,text}]
[--backend BACKEND] [--print-curl-command]
Example:
python3 via_client_cli.py chat \
--id d0b997f6-869a-4b2b-aee4-ea92d204d6bf \
--model vila-1.5 \
--prompt "Is there a person wearing white shirt?"
Alerts Commands#
Add Live Stream Alert#
Calls POST /alerts
internally. Adds an alert callback for a live-stream based
on currently running summarization prompts. Prints the alert id if added
successfully.
Whenever an alert is detected, VSS will POST
a request to the configured
URL with alert details.
Reference:
via_client_cli.py add-alert [-h] --live-stream-id LIVE_STREAM_ID
--callback-url CALLBACK_URL --events EVENTS
[--callback-json-template CALLBACK_JSON_TEMPLATE]
[--callback-token CALLBACK_TOKEN]
[--backend BACKEND] [--print-curl-command]
Example:
python3 via_client_cli.py add-alert \
--live-stream-id ea071500-3a47-4e6f-87da-1bc796075344 \
--callback-url http://localhost:14000/via-alert-callback \
--events "worker not wearing ppe" --events "boxes falling"
Refer to Alert Callback Test Server for starting a test alert callback server.
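The POST /alerts body gathers the live-stream id, the callback URL, and the event list; the JSON field names below are illustrative assumptions, and --print-curl-command shows the exact request.

```python
import requests

def build_alert_body(live_stream_id, callback_url, events, callback_token=None):
    # JSON body for POST /alerts; field names are illustrative assumptions
    body = {"liveStreamId": live_stream_id,
            "callbackUrl": callback_url,
            "events": list(events)}
    if callback_token is not None:
        body["callbackToken"] = callback_token
    return body

def add_alert(backend, live_stream_id, callback_url, events, callback_token=None):
    resp = requests.post(f"{backend}/alerts",
                         json=build_alert_body(live_stream_id, callback_url,
                                               events, callback_token))
    resp.raise_for_status()
    return resp.json()  # expected to include the new alert id
```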
List Live Stream Alerts#
Calls GET /alerts
internally. Prints the list of all live-stream alerts that
have been configured.
Reference:
via_client_cli.py list-alerts [-h] [--backend BACKEND] [--print-curl-command]
Example:
python3 via_client_cli.py list-alerts
Delete Live Stream Alert#
Calls DELETE /alerts/{id}
internally. Deletes a live-stream alert using its ID.
Reference:
via_client_cli.py delete-alert [-h] [--backend BACKEND] [--print-curl-command] alert_id
Example:
python3 via_client_cli.py delete-alert f82b2c8b-e5ce-49d7-be7e-053717d0cd49
Alert Callback Test Server#
Start a test server for receiving live-stream alerts from VSS. The server prints the alerts as received.
Reference:
via_client_cli.py alert-callback-server [-h] [--host HOST] [--port PORT]
Example:
python3 via_client_cli.py alert-callback-server --host 0.0.0.0 --port 5004
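The bundled test server is built on FastAPI and uvicorn (see the dependency list above); for illustration, an equivalent receiver can be written with only the standard library. It accepts the POST request that VSS sends to the configured callback URL and prints the alert payload as received. The /via-alert-callback path matches the callback URL used in the add-alert example.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class AlertHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # VSS POSTs alert details to the configured callback URL
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        print("alert received:", payload)
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(b'{"status": "ok"}')

    def log_message(self, fmt, *args):
        pass  # suppress per-request access logs; only alert payloads are printed

def serve(host="0.0.0.0", port=5004):
    # Blocks forever, like the CLI alert-callback-server command
    HTTPServer((host, port), AlertHandler).serve_forever()
```

Calling serve() mirrors the alert-callback-server command above.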
Server Health and Metrics Commands#
Server Health Check#
Calls GET /health/ready
internally. Checks the response status code
and prints the server health status.
Reference:
via_client_cli.py server-health-check [-h] [--backend BACKEND] [--print-curl-command]
Example:
python3 via_client_cli.py server-health-check
Server Metrics#
Calls GET /metrics
internally. Prints the server metrics. The metrics
are in Prometheus format.
Reference:
via_client_cli.py server-metrics [-h] [--backend BACKEND] [--print-curl-command]
Example:
python3 via_client_cli.py server-metrics
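Since GET /metrics returns Prometheus text exposition format, simple gauge and counter samples can be extracted with a few lines of parsing. This sketch ignores comment lines (# HELP, # TYPE) and keeps any label set as part of the sample name; it is not a full Prometheus parser.

```python
import requests

def parse_prometheus_metrics(text):
    # Extract "name value" samples; skip blanks and # HELP / # TYPE comments.
    # Labeled samples like name{k="v"} keep the labels as part of the name.
    metrics = {}
    for line in text.splitlines():
        parts = line.strip().split()
        if len(parts) < 2 or parts[0].startswith("#"):
            continue
        try:
            metrics[parts[0]] = float(parts[1])
        except ValueError:
            pass  # skip malformed sample lines
    return metrics

def fetch_metrics(backend):
    resp = requests.get(f"{backend}/metrics")
    resp.raise_for_status()
    return parse_prometheus_metrics(resp.text)
```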