PointPillars with TAO Deploy#
The PointPillars `.onnx` file generated from model export is taken as an input to `tao deploy` to generate an optimized TensorRT engine.
Converting an .onnx File into a TensorRT Engine#
Use the following commands to run PointPillars engine generation.
When using the FTMS client, we first need to set the base_experiment:
FILTER_PARAMS='{"network_arch": "pointpillars"}'
BASE_EXPERIMENTS=$(tao-client pointpillars list-base-experiments --filter_params "$FILTER_PARAMS")
Retrieve the PTM_ID from $BASE_EXPERIMENTS before setting base_experiment.
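As an illustrative sketch only: assuming the base-experiment listing is returned as JSON and jq is available (both are assumptions about your environment), the ID of the first matching base experiment could be extracted like this:
# Hypothetical helper: pull the ID of the first listed base experiment.
# Assumes $BASE_EXPERIMENTS is a JSON array whose entries carry an "id" field.
PTM_ID=$(echo "$BASE_EXPERIMENTS" | jq -r '.[0].id')
With the PTM_ID in hand, set the base_experiment: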
PTM_INFORMATION="{\"base_experiment\": [$PTM_ID]}"
tao-client pointpillars patch-artifact-metadata --id $EXPERIMENT_ID --job_type experiment --update_info $PTM_INFORMATION
Required Arguments
--id
: The unique identifier of the experiment
See also
For information on how to create an experiment using the FTMS client, refer to the Creating an experiment section in the Remote Client documentation.
Next, retrieve the gen_trt_engine specifications:
GEN_TRT_ENGINE_SPECS=$(tao-client pointpillars get-spec --action gen_trt_engine --job_type experiment --id $EXPERIMENT_ID --parent_job_id $EXPORT_JOB_ID)
Review the specifications stored in $GEN_TRT_ENGINE_SPECS and override values as needed; one possible override is sketched below.
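For example, assuming the returned specs are JSON and expose the TensorRT precision under a tensorrt.data_type key (an assumption; inspect $GEN_TRT_ENGINE_SPECS for the actual layout), an FP16 engine could be requested with jq:
# Hypothetical override: request FP16 precision before submitting the job.
# The key path is an assumption; check the real spec structure first.
GEN_TRT_ENGINE_SPECS=$(echo "$GEN_TRT_ENGINE_SPECS" | jq '.tensorrt.data_type = "FP16"')
Then run the gen_trt_engine action: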
GEN_TRT_ENGINE_JOB_ID=$(tao-client pointpillars experiment-run-action --action gen_trt_engine --id $EXPERIMENT_ID --specs "$GEN_TRT_ENGINE_SPECS")
When using the `tao deploy` CLI, the same spec file as the `tao model pointpillars export` command can be used:
tao deploy pointpillars gen_trt_engine [-h]
-e EXPERIMENT_SPEC
Required Arguments
-e, --experiment_spec
: The experiment spec file to set up the TensorRT engine generation. This should be the same as the export specification file.
Optional Arguments
-h, --help
: Show this help message and exit.
Sample Usage
Here’s an example of using the gen_trt_engine
command to generate an FP16 TensorRT engine:
tao deploy pointpillars gen_trt_engine -e gen_trt_engine.yaml
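Optionally, and outside of TAO itself, the resulting engine file can be sanity-checked with TensorRT's trtexec tool, which deserializes the engine and reports rough timing numbers; the path below is a placeholder for the engine path configured in your spec file:
# Optional sanity check with trtexec (ships with TensorRT).
# Replace the placeholder with the engine file produced by gen_trt_engine.
trtexec --loadEngine=/path/to/pointpillars_fp16.engine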
Running Evaluation through TensorRT Engine#
EVAL_SPECS=$(tao-client pointpillars get-spec --action evaluate --job_type experiment --id $EXPERIMENT_ID)
EVAL_JOB_ID=$(tao-client pointpillars experiment-run-action --action evaluate --job_type experiment --id $EXPERIMENT_ID --specs "$EVAL_SPECS" --parent_job_id $GEN_TRT_ENGINE_JOB_ID)
The same spec file as the TAO evaluation spec file can be used. A sample spec file is shown below:
dataset:
  class_names: ['Car', 'Pedestrian', 'Cyclist']
  type: 'GeneralPCDataset'
  data_path: '/media/data/zhimengf/tao-experiments/data/pointpillars'
  data_split: {
    'train': train,
    'test': val
  }
  data_info_path: "/media/data/zhimengf/tao-experiments/pointpillars/data_info"
  info_path: {
    'train': [infos_train.pkl],
    'test': [infos_val.pkl],
  }
  balanced_resampling: False
  point_feature_encoding: {
    encoding_type: absolute_coordinates_encoding,
    used_feature_list: ['x', 'y', 'z', 'intensity'],
    src_feature_list: ['x', 'y', 'z', 'intensity'],
  }
  point_cloud_range: [0, -39.68, -3, 69.12, 39.68, 1]
  data_augmentor:
    disable_aug_list: ['placeholder']
    aug_config_list:
      - name: gt_sampling
        db_info_path:
          - dbinfos_train.pkl
        preface: {
          filter_by_min_points: ['Car:5', 'Pedestrian:5', 'Cyclist:5'],
        }
        sample_groups: ['Car:15', 'Pedestrian:15', 'Cyclist:15']
        num_point_features: 4
        disable_with_fake_lidar: False
        remove_extra_width: [0.0, 0.0, 0.0]
        limit_whole_scene: False
      - name: random_world_flip
        along_axis_list: ['x']
      - name: random_world_rotation
        world_rot_angle: [-0.78539816, 0.78539816]
      - name: random_world_scaling
        world_scale_range: [0.95, 1.05]
  data_processor:
    - name: mask_points_and_boxes_outside_range
      remove_outside_boxes: True
  num_workers: 4
model:
  post_processing:
    recall_thresh_list: [0.3, 0.5, 0.7]
    score_thresh: 0.1
    output_raw_score: False
    eval_metric: kitti
    nms_config:
      multi_classes_nms: False
      nms_type: nms_gpu
      nms_thresh: 0.01
      nms_pre_max_size: 4096
      nms_post_max_size: 500
evaluate:
  batch_size: 1
  trt_engine: "/media/data/zhimengf/tao-experiments/pointpillars/retrain/checkpoint_epoch_80.onnx.fp16"
  results_dir: "/data/zhimengf/tao-experiments/tao-deploy/evaluate"
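Before launching evaluation, it can be worth confirming that the trt_engine path in the spec points at the engine actually produced by the gen_trt_engine step; a quick shell check such as the following sketch (using the path from the sample spec above) avoids a late failure:
# Confirm the TensorRT engine referenced by evaluate.trt_engine exists.
ENGINE="/media/data/zhimengf/tao-experiments/pointpillars/retrain/checkpoint_epoch_80.onnx.fp16"
test -f "$ENGINE" && echo "engine found" || echo "engine missing: $ENGINE"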
Use the following command to run PointPillars engine evaluation:
tao deploy pointpillars evaluate [-h]
-e EXPERIMENT_SPEC
Required Arguments
-e, --experiment_spec
: The experiment spec file for evaluation. This should be the same as the tao evaluate specification file.
Sample Usage
Here’s an example of using the evaluate
command to run evaluation with the TensorRT engine:
tao deploy pointpillars evaluate -e evaluate.yaml
Running Inference through TensorRT Engine#
INFER_SPECS=$(tao-client pointpillars get-spec --action inference --job_type experiment --id $EXPERIMENT_ID)
INFER_JOB_ID=$(tao-client pointpillars experiment-run-action --action inference --id $EXPERIMENT_ID --specs "$INFER_SPECS" --parent_job_id $GEN_TRT_ENGINE_JOB_ID)
Use the following command to run PointPillars engine inference:
tao deploy pointpillars inference [-h]
-e EXPERIMENT_SPEC
Required Arguments
-e, --experiment_spec
: The experiment spec file for inference. This should be the same as the tao inference specification file.
Sample Usage
Here’s an example of using the inference
command to run inference with the TensorRT engine:
tao deploy pointpillars inference -e inference.yaml
The visualization results are stored in the results_dir specified in the YAML spec.
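As a quick sanity check once the job finishes, the rendered outputs can be listed from that directory (the path below is a placeholder for whatever results_dir is set to in inference.yaml; the exact file layout may vary):
# List the inference visualizations written under results_dir.
ls -lh /path/to/inference/results_dir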