PointPillars with TAO Deploy#

The PointPillars `.onnx` file generated from model export is taken as input to TAO Deploy to generate an optimized TensorRT engine.

Converting an .onnx File into TensorRT Engine#

Use the following command to run PointPillars engine generation:

First, set the base_experiment.

FILTER_PARAMS='{"network_arch": "pointpillars"}'

BASE_EXPERIMENTS=$(tao-client pointpillars list-base-experiments --filter_params "$FILTER_PARAMS")

Retrieve the PTM_ID from $BASE_EXPERIMENTS before setting the base_experiment.
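As a sketch of this step, assuming the client returns a JSON list of base experiments each carrying an "id" field (the exact response shape may differ in your FTMS version), the ID can be pulled out with jq:

```shell
# Hypothetical response shape used for illustration; the real value
# comes from `tao-client pointpillars list-base-experiments`.
BASE_EXPERIMENTS='[{"id": "abc-123", "network_arch": "pointpillars"}]'

# Extract the first matching base-experiment ID (field name is an assumption).
PTM_ID=$(echo "$BASE_EXPERIMENTS" | jq -r '.[0].id')
echo "$PTM_ID"
```

If the listing contains multiple entries, inspect it first (e.g. `echo "$BASE_EXPERIMENTS" | jq .`) and select the ID you need.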

PTM_INFORMATION="{\"base_experiment\": [$PTM_ID]}"

tao-client pointpillars patch-artifact-metadata --id $EXPERIMENT_ID --job_type experiment --update_info $PTM_INFORMATION

Required Arguments

  • --id: The unique identifier of the experiment being deployed

See also

For information on how to create an experiment using the FTMS client, refer to the Creating an experiment section in the Remote Client documentation.

Then retrieve the specifications.

GEN_TRT_ENGINE_SPECS=$(tao-client pointpillars get-spec --action gen_trt_engine --job_type experiment --id $EXPERIMENT_ID --parent_job_id $EXPORT_JOB_ID)

Inspect the specifications in $GEN_TRT_ENGINE_SPECS and override values as needed.
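For example, a spec value can be overridden in place with jq before submitting the job. The key path below is an assumption for illustration; use the keys actually present in your retrieved $GEN_TRT_ENGINE_SPECS:

```shell
# Hypothetical spec fragment; in practice this JSON comes from `get-spec`.
GEN_TRT_ENGINE_SPECS='{"gen_trt_engine": {"tensorrt": {"data_type": "fp32"}}}'

# Override the engine precision to fp16 (key path is an assumption).
GEN_TRT_ENGINE_SPECS=$(echo "$GEN_TRT_ENGINE_SPECS" | jq '.gen_trt_engine.tensorrt.data_type = "fp16"')

# Confirm the override took effect.
echo "$GEN_TRT_ENGINE_SPECS" | jq -r '.gen_trt_engine.tensorrt.data_type'
```

The modified JSON is then passed unchanged via `--specs` in the run command below.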

GEN_TRT_ENGINE_JOB_ID=$(tao-client pointpillars experiment-run-action --action gen_trt_engine --id $EXPERIMENT_ID --specs "$GEN_TRT_ENGINE_SPECS")

Running Evaluation through TensorRT Engine#

EVAL_SPECS=$(tao-client pointpillars get-spec --action evaluate --job_type experiment --id $EXPERIMENT_ID)

EVAL_JOB_ID=$(tao-client pointpillars experiment-run-action --action evaluate --job_type experiment --id $EXPERIMENT_ID --specs "$EVAL_SPECS" --parent_job_id $GEN_TRT_ENGINE_JOB_ID)

Running Inference through TensorRT Engine#

INFER_SPECS=$(tao-client pointpillars get-spec --action inference --job_type experiment --id $EXPERIMENT_ID)

INFER_JOB_ID=$(tao-client pointpillars experiment-run-action --action inference --id $EXPERIMENT_ID --specs "$INFER_SPECS" --parent_job_id $GEN_TRT_ENGINE_JOB_ID)