Through this sample code, you will learn how to retrieve the ADLINK metadata from the application. The pipeline created by the sample is equivalent to this command:
gst-launch-1.0 videotestsrc ! video/x-raw, format=BGR, width=320, height=240, framerate=30/1 ! videoconvert ! admetadebuger type=1 id=187 class=boy prob=0.876 x1=0.1 y1=0.2 x2=0.3 y2=0.4 ! appsink
The admetadebuger plugin generates fake object detection inference results and writes them into the metadata. By hooking a callback function to the appsink, the user can get both the image data, as illustrated in the "Get stream data from pipeline" sample, and the results stored in the metadata.
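The pipeline description above can also be assembled programmatically before passing it to GStreamer (for example, to Gst.parse_launch). The sketch below only builds the description string; build_debug_pipeline is an illustrative helper, not part of the SDK:

```python
# Assemble the gst-launch pipeline description shown above.
# build_debug_pipeline is an illustrative helper, not an SDK function.
def build_debug_pipeline(obj_id=187, label="boy", prob=0.876,
                         box=(0.1, 0.2, 0.3, 0.4)):
    x1, y1, x2, y2 = box
    return (
        "videotestsrc "
        "! video/x-raw, format=BGR, width=320, height=240, framerate=30/1 "
        "! videoconvert "
        f"! admetadebuger type=1 id={obj_id} class={label} prob={prob} "
        f"x1={x1} y1={y1} x2={x2} y2={y2} "
        "! appsink"
    )

print(build_debug_pipeline())
```

The admetadebuger properties (id, class, prob, x1..y2) are exactly the fields that will come back out of the metadata in the callback below.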
This sample is similar to "Get stream data from pipeline". First, import gst_admeta.py:
# Required to import to get ADLINK inference metadata
import gst_admeta as admeta
Second, the new_sample callback function of the appsink contains an extra code block that retrieves the detection metadata:
def new_sample(sink, data) -> Gst.FlowReturn:
    # code omitted ...

    # get detection inference result
    buf = sample.get_buffer()
    boxes = admeta.get_detection_box(buf, 0)
    with boxes as det_box:
        if det_box is not None:
            for box in det_box:
                print('Detection result: prob={:.3f}, coordinate=({:.2f},{:.2f}) to ({:.2f},{:.2f})), Index = {}, Label = {}'.format(box.prob, box.x1, box.y1, box.x2, box.y2, box.obj_id, box.obj_label.decode("utf-8").strip()))
        else:
            print("None")

    # code omitted ...
The metadata structure can be found in the Edge Vision Analytics SDK Programming Guide, Chapter 5, "How to Use ADLINK Metadata", or in the file:
Based on this structure, admeta can get the frames in a vector, and the inference data stored in each frame depends on the inference type: classification, detection, segmentation, or OpenPose. This sample uses detection, get_detection_box, to illustrate retrieving the metadata and printing obj_id, obj_label, prob, and the box coordinates to the terminal or cmd.
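To experiment with the printing logic without a running pipeline, the per-box fields read in the callback can be mocked. DetectionBox below is an illustrative stand-in for the SDK's detection box structure, not the actual class; it only mirrors the fields the sample reads:

```python
from dataclasses import dataclass

@dataclass
class DetectionBox:
    # Illustrative mock of the fields read from admeta.get_detection_box()
    obj_id: int
    obj_label: bytes   # assumed padded byte string, hence decode().strip()
    prob: float
    x1: float
    y1: float
    x2: float
    y2: float

def format_box(box) -> str:
    # Same format string as in the new_sample callback above
    return ('Detection result: prob={:.3f}, coordinate=({:.2f},{:.2f}) to '
            '({:.2f},{:.2f})), Index = {}, Label = {}').format(
        box.prob, box.x1, box.y1, box.x2, box.y2,
        box.obj_id, box.obj_label.decode("utf-8").strip())

# Values match the admetadebuger properties used in the pipeline above
box = DetectionBox(187, b"boy ", 0.876, 0.1, 0.2, 0.3, 0.4)
print(format_box(box))
# -> Detection result: prob=0.876, coordinate=(0.10,0.20) to (0.30,0.40)), Index = 187, Label = boy
```

This reproduces the terminal output shown at the end of this sample.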
Go to the folder containing the script and run it in a terminal or cmd:
$ python3 getAdMetadata-object-detection.py
and you will see the inference result printed frame by frame in the terminal or cmd, as below:
Detection result: prob=0.876, coordinate=(0.10,0.20) to (0.30,0.40)), Index = 187, Label = boy