Through this sample code, you can learn how to retrieve the ADLINK metadata from the application. The pipeline created by this sample is:
gst-launch-1.0 videotestsrc ! video/x-raw, format=BGR, width=320, height=240, framerate=30/1 ! admetadebuger type=1 id=187 class=boy prob=0.876 x1=0.1 y1=0.2 x2=0.3 y2=0.4 ! appsink
The admetadebuger plugin generates fake object-detection inference results and attaches them to the buffer as metadata. By hooking a callback function to the appsink, as illustrated in the sample "Get stream data from pipeline", the user can retrieve both the image data and the results stored in the metadata.
This sample is similar to "Get stream data from pipeline". First, include gstadmeta.h:
#include "gstadmeta.h"
Second, add a helper function that retrieves the GstAdBatchMeta pointer from a GstBuffer:
GstAdBatchMeta* gst_buffer_get_ad_batch_meta(GstBuffer* buffer)
{
    gpointer state = NULL;
    GstMeta* meta;
    const GstMetaInfo* info = GST_AD_BATCH_META_INFO;

    while ((meta = gst_buffer_iterate_meta(buffer, &state)))
    {
        if (meta->info->api == info->api)
        {
            GstAdMeta *admeta = (GstAdMeta *) meta;
            if (admeta->type == AdBatchMeta)
                return (GstAdBatchMeta*) meta;
        }
    }
    return NULL;
}
Third, the appsink callback function, new_sample, contains an extra code block that retrieves the detection metadata:
GstAdBatchMeta *meta = gst_buffer_get_ad_batch_meta(buffer);
if (meta != NULL)
{
    AdBatch &batch = meta->batch;
    VideoFrameData frame_info = batch.frames[0];
    int detectionResultNumber = frame_info.detection_results.size();
    cout << "detection result number = " << detectionResultNumber << endl;
    for (int i = 0; i < detectionResultNumber; ++i)
    {
        cout << "========== metadata in application ==========\n";
        cout << "Class = " << frame_info.detection_results[i].obj_id << endl;
        cout << "Label = " << frame_info.detection_results[i].obj_label << endl;
        cout << "Prob = " << frame_info.detection_results[i].prob << endl;
        cout << "(x1, y1, x2, y2) = ("
             << frame_info.detection_results[i].x1 << ", "
             << frame_info.detection_results[i].y1 << ", "
             << frame_info.detection_results[i].x2 << ", "
             << frame_info.detection_results[i].y2 << ")" << endl;
        cout << "=============================================\n";
    }
}
The metadata structure can be found in the Edge Vision Analytics SDK Programming Guide, Chapter 5, "How to Use ADLINK Metadata", or in the corresponding header files.
Based on this structure, AdBatch holds the frames in a vector, and the inference data is stored in each frame according to the inference type: classification, detection, segmentation, or OpenPose. Here, the sample uses detection to illustrate retrieving the metadata, printing obj_id, obj_label, prob, and the box coordinates to the terminal or cmd.
Go to the folder containing the binary and run it in a terminal or cmd:
$ ./getAdMetadata-object-detection
and you will see the inference results printed frame by frame in the terminal or cmd, as below:
detection result number = 1
========== metadata in application ==========
Class = 187
Label = boy
Prob = 0.876
(x1, y1, x2, y2) = (0.1, 0.2, 0.3, 0.4)
=============================================