PointPillars with TAO Deploy#
The PointPillars `.onnx` file generated from model export is taken as an input to `tao deploy` to generate
an optimized TensorRT engine.
Converting an .onnx File into TensorRT Engine#
Use the following steps to run PointPillars engine generation through the FTMS client.
First, get the base experiment (pretrained model):
BASE_EXPERIMENT_ID=$(tao pointpillars list-base-experiments \
--filter-param network_arch=pointpillars --output json | jq -r '.[0].id')
See also
For information on how to create an experiment using the FTMS client, refer to the Creating an experiment section in the Remote Client documentation.
Then retrieve the specifications and save them to a file for editing.
tao pointpillars get-job-schema --action gen_trt_engine --output @gen_trt_engine_spec.yaml
Edit gen_trt_engine_spec.yaml as needed. Then create the job:
GEN_TRT_ENGINE_JOB_ID=$(tao pointpillars create-job \
--kind experiment \
--name "pointpillars_gen_trt_engine" \
--action gen_trt_engine \
--workspace-id $WORKSPACE_ID \
--parent-job-id $EXPORT_JOB_ID \
--specs @gen_trt_engine_spec.yaml \
--output json | jq -r '.id')
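The `--output json | jq -r '.id'` pattern used above simply extracts the `id` field from the client's JSON response. If `jq` is not available, a minimal Python equivalent can do the same (the function names here are just for illustration):

```python
import json

def extract_id(json_text: str) -> str:
    """Pull the `id` field from a JSON response, like `jq -r '.id'`."""
    return json.loads(json_text)["id"]

def extract_first_id(json_text: str) -> str:
    """Pull the first element's `id` from a JSON list, like `jq -r '.[0].id'`."""
    return json.loads(json_text)[0]["id"]
```

This mirrors what `jq` does for both the `create-job` response (a single object) and the `list-base-experiments` response (a list of objects).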
The same specification file as for the `tao model pointpillars export` command can be used. Alternatively, run engine generation directly through the TAO Deploy CLI:
tao deploy pointpillars gen_trt_engine [-h]
-e EXPERIMENT_SPEC
Required Arguments
-e, --experiment_spec: The experiment specification file to set up the TensorRT engine generation. This should be the same as the export specification file.
Optional Arguments
-h, --help: Show this help message and exit.
Sample Usage
Here’s an example of using the gen_trt_engine command to generate an FP16 TensorRT engine:
tao deploy pointpillars gen_trt_engine -e gen_trt_engine.yaml
Running Evaluation through TensorRT Engine#
EVAL_SPECS=$(tao pointpillars get-spec --action evaluate --job_type experiment --id $EXPERIMENT_ID)
EVAL_JOB_ID=$(tao pointpillars create-job --kind experiment --action evaluate --job_type experiment --id $EXPERIMENT_ID --specs "$EVAL_SPECS" --parent_job_id $GEN_TRT_ENGINE_JOB_ID)
The specification file is the same as the TAO evaluation specification file. A sample specification file follows:
dataset:
  class_names: ['Car', 'Pedestrian', 'Cyclist']
  type: 'GeneralPCDataset'
  data_path: '/media/data/zhimengf/tao-experiments/data/pointpillars'
  data_split: {
    'train': train,
    'test': val
  }
  data_info_path: "/media/data/zhimengf/tao-experiments/pointpillars/data_info"
  info_path: {
    'train': [infos_train.pkl],
    'test': [infos_val.pkl],
  }
  balanced_resampling: False
  point_feature_encoding: {
    encoding_type: absolute_coordinates_encoding,
    used_feature_list: ['x', 'y', 'z', 'intensity'],
    src_feature_list: ['x', 'y', 'z', 'intensity'],
  }
  point_cloud_range: [0, -39.68, -3, 69.12, 39.68, 1]
  data_augmentor:
    disable_aug_list: ['placeholder']
    aug_config_list:
      - name: gt_sampling
        db_info_path:
          - dbinfos_train.pkl
        preface: {
          filter_by_min_points: ['Car:5', 'Pedestrian:5', 'Cyclist:5'],
        }
        sample_groups: ['Car:15', 'Pedestrian:15', 'Cyclist:15']
        num_point_features: 4
        disable_with_fake_lidar: False
        remove_extra_width: [0.0, 0.0, 0.0]
        limit_whole_scene: False
      - name: random_world_flip
        along_axis_list: ['x']
      - name: random_world_rotation
        world_rot_angle: [-0.78539816, 0.78539816]
      - name: random_world_scaling
        world_scale_range: [0.95, 1.05]
  data_processor:
    - name: mask_points_and_boxes_outside_range
      remove_outside_boxes: True
  num_workers: 4
model:
  post_processing:
    recall_thresh_list: [0.3, 0.5, 0.7]
    score_thresh: 0.1
    output_raw_score: False
    eval_metric: kitti
    nms_config:
      multi_classes_nms: False
      nms_type: nms_gpu
      nms_thresh: 0.01
      nms_pre_max_size: 4096
      nms_post_max_size: 500
evaluate:
  batch_size: 1
  trt_engine: "/media/data/zhimengf/tao-experiments/pointpillars/retrain/checkpoint_epoch_80.onnx.fp16"
  results_dir: "/data/zhimengf/tao-experiments/tao-deploy/evaluate"
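In the spec above, `point_cloud_range` is `[x_min, y_min, z_min, x_max, y_max, z_max]`: the `mask_points_and_boxes_outside_range` processor drops points outside this box before they reach the network. A rough NumPy sketch of that masking (an illustration, not TAO's actual implementation) looks like:

```python
import numpy as np

def mask_points_outside_range(points: np.ndarray, pc_range) -> np.ndarray:
    """Keep only points whose (x, y, z) fall inside pc_range.

    points is an (N, 4) array of [x, y, z, intensity], matching
    num_point_features: 4 in the spec; pc_range is
    [x_min, y_min, z_min, x_max, y_max, z_max].
    """
    low = np.asarray(pc_range[:3])
    high = np.asarray(pc_range[3:])
    inside = np.all((points[:, :3] >= low) & (points[:, :3] <= high), axis=1)
    return points[inside]
```

With the range in the spec, a point at x = 70.0 would be discarded because it lies beyond the 69.12 m forward limit.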
Use the following command to run PointPillars engine evaluation:
tao deploy pointpillars evaluate [-h]
-e EXPERIMENT_SPEC
Required Arguments
-e, --experiment_spec: The experiment specification file for evaluation. This should be the same as the tao evaluate specification file.
Sample Usage
Here’s an example of using the evaluate command to run evaluation with the TensorRT engine:
tao deploy pointpillars evaluate -e evaluate.yaml
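The `post_processing` block in the spec above controls how raw detections become final boxes: predictions below `score_thresh` (0.1) are dropped, at most `nms_pre_max_size` (4096) candidates enter NMS, overlapping boxes above `nms_thresh` (0.01) are suppressed, and at most `nms_post_max_size` (500) survive. The following NumPy sketch of greedy axis-aligned BEV NMS illustrates those parameters; it is a simplification, not the `nms_gpu` kernel TAO actually uses:

```python
import numpy as np

def bev_iou(box, boxes):
    """Axis-aligned IoU between one [x1, y1, x2, y2] box and many."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter)

def simple_nms(boxes, scores, score_thresh=0.1, nms_thresh=0.01,
               pre_max_size=4096, post_max_size=500):
    """Greedy NMS mirroring the spec's post_processing defaults."""
    keep_mask = scores >= score_thresh            # score_thresh filter
    boxes, scores = boxes[keep_mask], scores[keep_mask]
    order = np.argsort(-scores)[:pre_max_size]    # nms_pre_max_size cap
    keep = []
    while order.size and len(keep) < post_max_size:  # nms_post_max_size cap
        i = order[0]
        keep.append(i)
        ious = bev_iou(boxes[i], boxes[order[1:]])
        order = order[1:][ious <= nms_thresh]     # suppress overlaps
    return boxes[keep], scores[keep]
```

With `nms_thresh` as low as 0.01, almost any overlap between two boxes suppresses the lower-scoring one, which suits non-overlapping objects such as cars and pedestrians.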
Running Inference through TensorRT Engine#
INFER_SPECS=$(tao pointpillars get-spec --action inference --job_type experiment --id $EXPERIMENT_ID)
INFER_JOB_ID=$(tao pointpillars create-job --kind experiment --action inference --id $EXPERIMENT_ID --specs "$INFER_SPECS" --parent_job_id $GEN_TRT_ENGINE_JOB_ID)
tao deploy pointpillars inference [-h]
-e EXPERIMENT_SPEC
Required Arguments
-e, --experiment_spec: The experiment specification file for inference. This should be the same as the tao inference specification file.
Sample Usage
Here’s an example of using the inference command to run inference with the TensorRT engine:
tao deploy pointpillars inference -e inference.yaml
The visualizations are stored under the results_dir specified in the YAML spec.
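After the job completes, a small helper can gather the rendered outputs from `results_dir`; the file extensions below are an assumption, so adjust them to whatever your run actually produces:

```python
from pathlib import Path

def collect_results(results_dir: str, exts=(".png", ".jpg", ".txt")):
    # Walk results_dir recursively and return every file whose extension
    # matches, sorted for stable ordering.
    root = Path(results_dir)
    return sorted(p for p in root.rglob("*") if p.suffix.lower() in exts)
```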