Through this sample code, you can learn how to retrieve the ADLINK metadata inside a GStreamer element. The target pipeline command is:
$ gst-launch-1.0 videotestsrc ! video/x-raw, format=BGR, width=320, height=240, framerate=30/1 ! admetadebuger type=1 id=187 class=boy prob=0.876 x1=0.1 y1=0.2 x2=0.3 y2=0.4 ! adgetobjectdetection ! fakesink
This command simulates an inferenced object detection result via admetadebuger; the element in this sample code, adgetobjectdetection, retrieves the object detection data from the ADLINK metadata and prints it to the terminal.
First, include gstadmeta.h:
#include "gstadmeta.h"
Second, add a helper function that retrieves the GstAdBatchMeta pointer from a GstBuffer:
GstAdBatchMeta* gst_buffer_get_ad_batch_meta(GstBuffer* buffer)
{
    gpointer state = NULL;
    GstMeta* meta;
    const GstMetaInfo* info = GST_AD_BATCH_META_INFO;

    // Walk every meta attached to the buffer.
    while ((meta = gst_buffer_iterate_meta(buffer, &state)))
    {
        // Match on the ADLINK meta API first...
        if (meta->info->api == info->api)
        {
            GstAdMeta* admeta = (GstAdMeta*)meta;
            // ...then on the batch-meta subtype.
            if (admeta->type == AdBatchMeta)
                return (GstAdBatchMeta*)meta;
        }
    }
    return NULL;
}
Third, the virtual method ad_get_object_detection_transform_frame_ip of this sample contains an extra code block, getObjectDetectionData, which retrieves the object detection metadata:
GstAdBatchMeta* meta = gst_buffer_get_ad_batch_meta(buffer);
if (meta == NULL)
    GST_MESSAGE("Adlink metadata does not exist!");
else
{
    AdBatch& batch = meta->batch;
    bool frame_exist = batch.frames.size() > 0;
    if (frame_exist)
    {
        VideoFrameData frame_info = batch.frames[0];
        int detectionResultNumber = frame_info.detection_results.size();
        std::cout << "detection result number = " << detectionResultNumber << std::endl;
        for (int i = 0; i < detectionResultNumber; ++i)
        {
            std::cout << "========== metadata in application ==========\n";
            std::cout << "Class = " << frame_info.detection_results[i].obj_id << std::endl;
            std::cout << "Label = " << frame_info.detection_results[i].obj_label << std::endl;
            std::cout << "Prob = " << frame_info.detection_results[i].prob << std::endl;
            std::cout << "(x1, y1, x2, y2) = ("
                      << frame_info.detection_results[i].x1 << ", "
                      << frame_info.detection_results[i].y1 << ", "
                      << frame_info.detection_results[i].x2 << ", "
                      << frame_info.detection_results[i].y2 << ")" << std::endl;
            std::cout << "=============================================\n";
        }
    }
}
The metadata structure can be found in the Edge Vision Analytics SDK Programming Guide, Chapter 5: How to Use ADLINK Metadata, or in the files:
Based on the structure, AdBatch holds the frames in a vector, and the inference data are stored in each frame according to the inference type: classification, detection, segmentation, or openpose. Here, the sample uses object detection to illustrate retrieving the metadata and printing obj_id, obj_label, prob, and the box coordinates to the terminal.
Copy the built plugin file, libadgetobjectdetection.so, to the plugin folder of the EVA installation; here EVA_ROOT denotes the EVA SDK installation path. Then run the pipeline command for testing:
$ gst-launch-1.0 videotestsrc ! video/x-raw, format=BGR, width=320, height=240, framerate=30/1 ! admetadebuger type=1 id=187 class=boy prob=0.876 x1=0.1 y1=0.2 x2=0.3 y2=0.4 ! adgetobjectdetection ! fakesink
and you will see the message showing the inference result frame by frame in the terminal, as below:
detection result number = 1
========== metadata in application ==========
Class = 187
Label = boy
Prob = 0.876
(x1, y1, x2, y2) = (0.1, 0.2, 0.3, 0.4)
=============================================