Through this sample code, you can learn how to retrieve the ADLINK metadata from the application. The target pipeline to be created is:
gst-launch-1.0 videotestsrc ! video/x-raw, format=BGR, width=320, height=240, framerate=30/1 ! admetadebuger type=0 id=187 class=boy prob=0.876 ! appsink
The admetadebuger is a plugin that generates fake classification inference results and stores them in the metadata. By hooking a callback function to appsink, the user can get the image data, as the sample code "Get stream data from pipeline" illustrates, and also retrieve the results stored in the metadata.
This sample is similar to "Get stream data from pipeline". First, include gstadmeta.h:
#include "gstadmeta.h"
Second, add a subfunction that retrieves the GstAdBatchMeta pointer from a GstBuffer:
GstAdBatchMeta* gst_buffer_get_ad_batch_meta(GstBuffer* buffer)
{
    gpointer state = NULL;
    GstMeta* meta;
    const GstMetaInfo* info = GST_AD_BATCH_META_INFO;
    while ((meta = gst_buffer_iterate_meta(buffer, &state)))
    {
        if (meta->info->api == info->api)
        {
            GstAdMeta* admeta = (GstAdMeta*)meta;
            if (admeta->type == AdBatchMeta)
                return (GstAdBatchMeta*)meta;
        }
    }
    return NULL;
}
Third, the callback function of appsink, new_sample, has an extra code block which retrieves the classification metadata:
GstAdBatchMeta* meta = gst_buffer_get_ad_batch_meta(buffer);
if (meta != NULL)
{
    AdBatch& batch = meta->batch;
    VideoFrameData frame_info = batch.frames[0];
    int classificationResultNumber = frame_info.class_results.size();
    cout << "classification result number = " << classificationResultNumber << endl;
    for (int i = 0; i < classificationResultNumber; ++i)
    {
        cout << "========== metadata in application ==========\n";
        cout << "Class = " << frame_info.class_results[i].index << endl;
        cout << "Label = " << frame_info.class_results[i].label << endl;
        cout << "output = " << frame_info.class_results[i].output << endl;
        cout << "Prob = " << frame_info.class_results[i].prob << endl;
    }
}
The metadata structure can be found in the Edge Vision Analytics SDK Programming Guide, Chapter 5: How to Use ADLINK Metadata, or in the following files:
Based on the structure, AdBatch holds the frames in a vector, and the inference data are stored in each frame according to the inference type: classification, detection, segmentation, or openpose. Here, the sample uses classification to illustrate getting the metadata, printing the index, label, output, and prob to the terminal or cmd.
Go to the folder of the binary and run it in a terminal or cmd:
$ ./getAdMetadata-classification
and you will see a message showing the inference result frame by frame in the terminal or cmd, as below:
classification result number = 1
========== metadata in application ==========
Class = 187
Label = boy
output =
Prob = 0.876
=============================================
Note that the admetadebuger does not put any fake "output" information into the metadata, so the output value is empty.