Through this sample code, you can learn how to retrieve the ADLINK metadata inside a GStreamer element. The target pipeline command is:
$ gst-launch-1.0 videotestsrc ! video/x-raw, format=BGR, width=320, height=240, framerate=30/1 ! admetadebuger type=0 id=187 class=boy prob=0.876 ! adgetclassification ! fakesink
This command uses admetadebuger to simulate an inferenced classification result, and the element built in this sample, adgetclassification, retrieves the classification data from the ADLINK metadata and prints it in the terminal.
First, include gstadmeta.h:
#include "gstadmeta.h"
Second, add a helper function that retrieves the GstAdBatchMeta pointer from a GstBuffer:
GstAdBatchMeta* gst_buffer_get_ad_batch_meta(GstBuffer* buffer)
{
    gpointer state = NULL;
    GstMeta* meta;
    const GstMetaInfo* info = GST_AD_BATCH_META_INFO;

    /* Walk every meta attached to the buffer. */
    while ((meta = gst_buffer_iterate_meta(buffer, &state)))
    {
        /* Match on the API type first, then narrow by the ADLINK type tag. */
        if (meta->info->api == info->api)
        {
            GstAdMeta* admeta = (GstAdMeta*) meta;
            if (admeta->type == AdBatchMeta)
                return (GstAdBatchMeta*) meta;
        }
    }
    return NULL;
}
Third, the virtual method of this sample, ad_get_classification_transform_frame_ip, has an extra code block, getClassificationData, which retrieves the classification metadata:
GstAdBatchMeta *meta = gst_buffer_get_ad_batch_meta(buffer);
if (meta == NULL)
    GST_MESSAGE("ADLINK metadata does not exist!");
else
{
    AdBatch &batch = meta->batch;
    bool frame_exist = batch.frames.size() > 0;
    if (frame_exist)
    {
        /* This sample only inspects the first frame of the batch. */
        VideoFrameData frame_info = batch.frames[0];
        int classificationResultNumber = frame_info.class_results.size();
        std::cout << "there are " << classificationResultNumber << " results." << std::endl;
        for (int i = 0; i < classificationResultNumber; ++i)
        {
            std::cout << "*********** classification result #" << (i+1) << std::endl;
            adlink::ai::ClassificationResult classification_result = frame_info.class_results[i];
            std::cout << "index = " << classification_result.index << std::endl;
            std::cout << "output = " << classification_result.output << std::endl;
            std::cout << "label = " << classification_result.label << std::endl;
            std::cout << "prob = " << classification_result.prob << std::endl;
        }
    }
}
The metadata structure can be found in the Edge Vision Analytics SDK Programming Guide, Chapter 5: How to Use ADLINK Metadata, or in the following files:
Based on this structure, AdBatch holds its frames in a vector, and the inference data are stored in each frame according to the inference type: classification, detection, segmentation, or openpose. Here the sample uses classification to illustrate retrieving the metadata, printing index, label, output, and prob to the terminal.
Copy the built plugin file, libadgetclassification.so, to the plugin folder of the EVA installation; here EVA_ROOT denotes the install path of the EVA SDK. Then run the pipeline command for testing:
$ gst-launch-1.0 videotestsrc ! video/x-raw, format=BGR, width=320, height=240, framerate=30/1 ! admetadebuger type=0 id=187 class=boy prob=0.876 ! adgetclassification ! fakesink
and you will see the message showing the inference result frame by frame in the terminal, as below:
there are 1 results.
*********** classification result #1
index = 187
output =
label = boy
prob = 0.876
Note that admetadebuger does not put any fake "output" information into the metadata, so the output value is empty.