| Network | TensorRT | OpenVINO | OnnxRuntime | Translator Plugin | Minimum required EVA | Reference link |
|---|---|---|---|---|---|---|
| PoseNet | Tested | Tested | Tested | adtrans_posenet | 3.8.1+ | deepstream_pose_estimation |
Original model: PoseNet

Refer to the README in the original model's git repository to convert your trained PyTorch model to ONNX.
FP32:

```shell
trtexec --onnx=pose_estimation.onnx --buildOnly --saveEngine=YOUR_MODEL_NAME.engine --maxBatch=1
```

FP16:

```shell
trtexec --onnx=pose_estimation.onnx --buildOnly --saveEngine=YOUR_MODEL_NAME.engine --maxBatch=1 --fp16
```
Note: For TensorRT versions above 8.2, remove the `--maxBatch=1` parameter when using ONNX models. To check your TensorRT version:

```shell
dpkg -l | grep nvinfer
```

For rpm-based systems:

```shell
rpm -qa | grep nvinfer
```

Adjust your commands accordingly based on your TensorRT version.
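The version check above can be folded into a small helper. This is a hypothetical sketch (not part of the plugin suite): the function takes a `libnvinfer` version string, as reported by `dpkg` or `rpm`, and decides whether `--maxBatch=1` should be passed to `trtexec`.

```shell
#!/bin/sh
# Hypothetical helper: given a TensorRT (libnvinfer) version string,
# print the batch flag needed by trtexec, or nothing for newer versions.
# The version would normally be read from `dpkg -l | grep nvinfer`;
# it is passed as an argument here so the logic stands alone.
trtexec_batch_flag() {
    version="$1"                       # e.g. "8.4.1"
    major="${version%%.*}"
    rest="${version#*.}"
    minor="${rest%%.*}"
    # --maxBatch=1 applies only on TensorRT <= 8.2 with ONNX models
    if [ "$major" -lt 8 ] || { [ "$major" -eq 8 ] && [ "$minor" -le 2 ]; }; then
        echo "--maxBatch=1"
    else
        echo ""
    fi
}

trtexec_batch_flag "8.2.5"    # prints --maxBatch=1
trtexec_batch_flag "8.4.1"    # prints an empty line
```

The flag can then be spliced into the conversion command, e.g. `trtexec --onnx=pose_estimation.onnx --buildOnly --saveEngine=model.engine $(trtexec_batch_flag "$version")`.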
FP32:

```shell
python3 mo.py --input_model pose_estimation.onnx --model_name MODEL_NAME --output_dir OUTPUT_DIR
```

FP16:

```shell
python3 mo.py --input_model pose_estimation.onnx --model_name MODEL_NAME --output_dir OUTPUT_DIR --data_type FP16
```
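The two Model Optimizer invocations above differ only in the `--data_type` flag, so they can be wrapped in one function. This is a hypothetical sketch; the `mo.py` flags are exactly those shown above, and the function echoes the command instead of executing it so the composition is visible.

```shell
#!/bin/sh
# Hypothetical wrapper: build the mo.py command for FP32 or FP16
# from one set of arguments. Paths and names are placeholders.
mo_convert() {
    onnx="$1"; name="$2"; outdir="$3"; precision="$4"   # precision: FP32 or FP16
    cmd="python3 mo.py --input_model $onnx --model_name $name --output_dir $outdir"
    if [ "$precision" = "FP16" ]; then
        cmd="$cmd --data_type FP16"
    fi
    echo "$cmd"     # echoed for illustration; pipe to sh to actually run it
}

mo_convert pose_estimation.onnx MODEL_NAME OUTPUT_DIR FP32
mo_convert pose_estimation.onnx MODEL_NAME OUTPUT_DIR FP16
```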
Example pipeline:

```shell
gst-launch-1.0 filesrc location=street.mp4 ! decodebin ! videoconvert ! adrt model=pose_estimation.engine scale=0.0039 rgbconv=True ! adtrans_posenet threshold=0.2 save-body-parts=true ! admetadrawer showlabel=false ! videoconvert ! ximagesink
```
Explanation of some plugin parameters:

- `adrt model=pose_estimation.engine scale=0.0039 rgbconv=True`: runs inference with the TensorRT engine; `scale=0.0039` (approximately 1/255) normalizes input pixel values, and `rgbconv=True` enables RGB color conversion.
- `adtrans_posenet threshold=0.2 save-body-parts=true`: translates the raw model output into pose metadata; `threshold=0.2` discards low-confidence detections, and `save-body-parts=true` keeps individual body-part data in the metadata.
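For experimentation, the pipeline above can be assembled from variables so the input video, engine path, and threshold are changed in one place. This is a hypothetical sketch; the element and property names are exactly those in the pipeline above, and the command is echoed rather than run so it works without GStreamer installed.

```shell
#!/bin/sh
# Hypothetical parameterized form of the gst-launch-1.0 pipeline above.
VIDEO=street.mp4
ENGINE=pose_estimation.engine
THRESHOLD=0.2

PIPELINE="filesrc location=$VIDEO ! decodebin ! videoconvert \
 ! adrt model=$ENGINE scale=0.0039 rgbconv=True \
 ! adtrans_posenet threshold=$THRESHOLD save-body-parts=true \
 ! admetadrawer showlabel=false ! videoconvert ! ximagesink"

echo "gst-launch-1.0 $PIPELINE"    # echoed for illustration
# To run it: eval "gst-launch-1.0 $PIPELINE"
```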