| Network | TensorRT | OpenVINO | OnnxRuntime | Translator Plugin | Minimum required EVA | Reference link |
|---|---|---|---|---|---|---|
| OpenPose | Tested | Tested | Tested | adtrans_openpose_py | 3.0.0+ | lightweight-human-pose-estimation |
Original model: OpenPose

Convert the PyTorch model to ONNX format by running the following script in a terminal (see the README in the original model's git repository for details):

```
python3 scripts/convert_to_onnx.py --checkpoint-path <CHECKPOINT>
```
Build a TensorRT engine (FP32):

```
trtexec --onnx=human-pose-estimation-6-9-sim.onnx --buildOnly --saveEngine=YOUR_MODEL_NAME.engine --maxBatch=1
```

Build a TensorRT engine (FP16):

```
trtexec --onnx=human-pose-estimation-6-9-sim.onnx --buildOnly --saveEngine=YOUR_MODEL_NAME.engine --maxBatch=1 --fp16
```
Note: For TensorRT versions newer than 8.2, remove the `--maxBatch=1` parameter when using ONNX models. To check your TensorRT version:

```
dpkg -l | grep nvinfer
```

(For rpm-based systems: `rpm -qa | grep nvinfer`.) Adjust the commands above according to your TensorRT version.
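The version check above can be captured in a small helper. This is an illustrative sketch, not part of the EVA SDK; the function name and version strings are assumptions for the example:

```python
def needs_max_batch(version: str) -> bool:
    """Return True for TensorRT <= 8.2, where --maxBatch=1 is still required."""
    # Compare only the major and minor components of the version string.
    major, minor = (int(p) for p in version.split(".")[:2])
    return (major, minor) <= (8, 2)

print(needs_max_batch("8.2.5"))  # True  -> keep --maxBatch=1
print(needs_max_batch("8.6.1"))  # False -> drop the flag
```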
Convert to OpenVINO IR (FP32):

```
python3 mo.py --input_model human-pose-estimation-6-9-sim.onnx --input data --mean_values data[128.0,128.0,128.0] --scale_values data[256] --output stage_1_output_0_pafs,stage_1_output_1_heatmaps --model_name MODEL_NAME --output_dir OUTPUT_DIR
```

Convert to OpenVINO IR (FP16):

```
python3 mo.py --input_model human-pose-estimation-6-9-sim.onnx --input data --mean_values data[128.0,128.0,128.0] --scale_values data[256] --output stage_1_output_0_pafs,stage_1_output_1_heatmaps --model_name MODEL_NAME --output_dir OUTPUT_DIR --data_type FP16
```
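The `--mean_values` and `--scale_values` flags fold input normalization into the converted model: each pixel is transformed as `(x - mean) / scale`. A quick pure-Python sketch of the resulting value range (for illustration only):

```python
MEAN = 128.0   # matches --mean_values data[128.0,128.0,128.0]
SCALE = 256.0  # matches --scale_values data[256]

def normalize(pixel: float) -> float:
    """Apply the normalization baked into the IR by mo.py."""
    return (pixel - MEAN) / SCALE

# An 8-bit input range [0, 255] maps to roughly [-0.5, 0.5].
print(normalize(0.0), normalize(255.0))
```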
Run the pipeline (TensorRT example):

```
gst-launch-1.0 filesrc location=street.mp4 ! decodebin ! videoconvert ! adrt device=1 model=human-pose_FP32.engine scale=0.0039 mean="128 128 128" ! adtrans_openpose_py blob-size='(1,19,32,57),(1,38,32,57)' input-height=256 input-width=456 ! admetadrawer ! videoconvert ! ximagesink
```
Explanation of some plugin parameters:

- `adrt device=1 model=human-pose_FP32.engine scale=0.0039 mean="128 128 128"` — runs inference on the specified TensorRT engine. `scale=0.0039` (≈ 1/256) and `mean="128 128 128"` match the normalization used when converting the model.
- `adtrans_openpose_py blob-size='(1,19,32,57),(1,38,32,57)' input-height=256 input-width=456` — translates the raw output blobs (the 19-channel heatmaps and 38-channel PAFs) into pose metadata. `blob-size` and the input dimensions must match the model's actual input and output shapes.
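The `blob-size` shapes are tied to the input resolution: this lightweight OpenPose variant downsamples by a factor of 8, so a 256×456 input yields 32×57 output maps. A minimal consistency check, assuming that output stride of 8:

```python
STRIDE = 8  # assumed network output stride for this model

def output_hw(input_h: int, input_w: int) -> tuple[int, int]:
    """Spatial size of the output heatmap/PAF blobs for a given input size."""
    return input_h // STRIDE, input_w // STRIDE

h, w = output_hw(256, 456)
print(h, w)  # 32 57 -- matches blob-size '(1,19,32,57),(1,38,32,57)'
```

If you change `input-height`/`input-width` in the pipeline, recompute `blob-size` the same way so the translator plugin parses the blobs correctly.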