Through this sample code, you can learn how to retrieve the ADLINK metadata inside a GStreamer element. The target pipeline command is:
$ gst-launch-1.0 videotestsrc ! video/x-raw, format=BGR, width=320, height=240, framerate=30/1 ! admetadebuger type=0 id=187 class=boy prob=0.876 ! get_classification ! fakesink
This command simulates an inferenced classification result via admetadebuger; the element in this sample code, get_classification, retrieves the classification data from the ADLINK metadata and prints it to the terminal.
To retrieve the ADLINK metadata inside the element, first import gst_admeta:
import gst_admeta as admeta
Second, the chainfunc method of this sample contains an extra code block that retrieves the classification metadata:
# Get the classification results from the ADLINK metadata
class_results = admeta.get_classification(buff, 0)

# Iteratively retrieve the content of the classification results
with class_results as results:
    if results is not None:
        for r in results:
            print('**********************')
            print('classification result:')
            print('id = ', r.index)
            print('output = ', r.output.decode("utf-8").strip())
            print('label = ', r.label.decode("utf-8").strip())
            print('prob = {:.3f}'.format(r.prob))
    else:
        print("None")
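The `with class_results as results:` pattern above works because the object returned by `get_classification` acts as a context manager that guards access to the buffer metadata while it is read. A minimal, stand-alone sketch of that pattern, using a hypothetical stand-in class instead of the real `gst_admeta` module (the field names `index`, `output`, `label`, and `prob` follow the sample above):

```python
from dataclasses import dataclass


# Hypothetical stand-in for one classification entry; the field names
# mirror those used in the sample code above.
@dataclass
class ClassificationResult:
    index: int
    output: bytes
    label: bytes
    prob: float


class ResultGuard:
    """Mimics the context-manager access pattern of get_classification."""

    def __init__(self, results):
        self._results = results

    def __enter__(self):
        # A real implementation would lock the buffer metadata here.
        return self._results

    def __exit__(self, exc_type, exc, tb):
        # ...and release the lock here.
        return False


def print_classifications(guard):
    with guard as results:
        if results is not None:
            for r in results:
                print('id = ', r.index)
                print('label = ', r.label.decode("utf-8").strip())
                print('prob = {:.3f}'.format(r.prob))
        else:
            print("None")


# Simulate the metadata produced by the pipeline above
guard = ResultGuard([ClassificationResult(187, b'', b'boy', 0.876)])
print_classifications(guard)
```

This keeps the read of the shared metadata scoped to the `with` block, so the real plugin cannot accidentally hold a reference to the results after the lock is released.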
The metadata structure can be found in the Edge Vision Analytics SDK Programming Guide, Chapter 5: How to Use ADLINK Metadata, or in the files:
Based on this structure, AdBatch holds the frames in a vector, and the inference data is stored in each frame according to the inference type: classification, detection, segmentation, or openpose. This sample uses classification to illustrate retrieving the metadata and printing index, label, output, and prob to the terminal.
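The batch/frame nesting described above can be pictured with the following hedged Python sketch. The class names and fields here are illustrative only; the real definitions live in the SDK files and the programming guide referenced above.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Classification:
    # Illustrative fields matching what the sample prints out
    index: int
    label: str
    output: str
    prob: float


@dataclass
class Frame:
    # Each frame carries inference data grouped by type;
    # only classification is sketched here.
    classifications: List[Classification] = field(default_factory=list)


@dataclass
class AdBatch:
    # The batch holds its frames in a vector (a Python list here)
    frames: List[Frame] = field(default_factory=list)


# Simulated data matching the pipeline parameters (id=187, class=boy,
# prob=0.876); output is empty because admetadebuger pads no fake output.
batch = AdBatch(frames=[
    Frame(classifications=[Classification(187, 'boy', '', 0.876)])
])

for frame in batch.frames:
    for c in frame.classifications:
        print('id = {}, label = {}, prob = {:.3f}'.format(c.index, c.label, c.prob))
```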
As with video-filter.py in the previous section, the plugin must be installed into the EVA package. For the installation process, please refer to Video Filter in Python.
After installing the Python plugin file, get-classification.py, into the plugin folder, run the GStreamer inspection tool to check the element and its information:
$ gst-inspect-1.0 get_classification
and you will see all of the information listed:
Factory Details:
  Rank                     none (0)
  Long-name                Video Filter
  Klass                    GstElement
  Description              Python based GStreamer videofilter example
  Author                   Dr. Paul Lin <paul.lin@adlinktech.com>

Plugin Details:
  Name                     python
  Description              loader for plugins written in python
  Filename                 /usr/lib/x86_64-linux-gnu/gstreamer-1.0/libgstpython.so
  Version                  1.14.5
  License                  LGPL
  Source module            gst-python
  Binary package           GStreamer Python
  Origin URL               http://gstreamer.freedesktop.org

// more information omitted
Then run the pipeline command for testing:
$ gst-launch-1.0 videotestsrc ! video/x-raw, format=BGR, width=320, height=240, framerate=30/1 ! admetadebuger type=0 id=187 class=boy prob=0.876 ! get_classification ! fakesink
and you will see the inference result printed frame by frame in the terminal or cmd, as below:
classification result:
id = 187
output =
label = boy
prob = 0.876
Note that admetadebuger does not pad any fake "output" information into the metadata, so the output value is empty.