Through this sample code, you can learn how to feed frames into a GStreamer pipeline and retrieve the processed output. The sample first creates three pipelines in the application:
➤ The first pipeline, created in a worker thread:
appsrc ! clockoverlay ! videoconvert ! appsink
➤ The second pipeline, for capturing from the webcam through OpenCV VideoCapture:
For Windows:
appsrc ! videoconvert ! video/x-raw, format=BGR, width=640, height=480, framerate=30/1 ! videoconvert ! d3dvideosink sync=false
For Linux:
appsrc ! videoconvert ! video/x-raw, format=BGR, width=640, height=480, framerate=30/1 ! videoconvert ! ximagesink sync=false
➤ The last pipeline, for writing frames out through OpenCV VideoWriter:
For Windows:
appsrc ! videoconvert ! video/x-raw, format=BGR, width=640, height=480, framerate=30/1 ! videoconvert ! d3dvideosink sync=false
For Linux:
appsrc ! videoconvert ! video/x-raw, format=BGR, width=640, height=480, framerate=30/1 ! videoconvert ! ximagesink sync=false
VideoCapture reads a frame and pushes it to the buffer grabVec. The appsrc element then notifies the callback function cb_need_data to feed it the image data, which is processed downstream in the pipeline (here, by clockoverlay) and ends at appsink. The appsink element notifies the callback function new_sample to retrieve the final image data and push it to the buffer pipeLineOutputVec. Each processed image is saved next to the sample binary in the same folder.
OpenCV provides a convenient way for developers who want to apply their own API, algorithm, or unique processing. Based on the sample in "Generate a basic pipeline in opencv", another pipeline can be created in a thread that asks the user to fill in frame data and overlays clock information on the top-left of the frame via the GStreamer clockoverlay element.
The establish_appsrc_appsink_pipeline function builds the pipeline:
appsrc ! clockoverlay ! videoconvert ! appsink
as shown in the code fragment below:
static void establish_appsrc_appsink_pipeline()
{
    /* init GStreamer */
    gst_init (NULL, NULL);
    loop = g_main_loop_new (NULL, FALSE);

    /* setup pipeline */
    pipeline = gst_pipeline_new ("pipeline");
    appsrc = gst_element_factory_make ("appsrc", "source");
    clockoverlay = gst_element_factory_make ("clockoverlay", "clockoverlay");
    conv = gst_element_factory_make ("videoconvert", "conv");
    appsink = gst_element_factory_make ("appsink", "appsink");

    /* setup caps on appsrc */
    g_object_set (G_OBJECT (appsrc), "caps",
        gst_caps_new_simple ("video/x-raw",
            "format", G_TYPE_STRING, "BGR",
            "width", G_TYPE_INT, 640,
            "height", G_TYPE_INT, 480,
            "framerate", GST_TYPE_FRACTION, 30, 1, NULL), NULL);
    gst_bin_add_many (GST_BIN (pipeline), appsrc, clockoverlay, conv, appsink, NULL);
    gst_element_link_many (appsrc, clockoverlay, conv, appsink, NULL);

    /* setup appsrc */
    g_object_set (G_OBJECT (appsrc), "stream-type", 0, "format", GST_FORMAT_TIME, NULL);
    g_signal_connect (appsrc, "need-data", G_CALLBACK (cb_need_data), NULL);

    /* setup appsink */
    g_object_set (G_OBJECT (appsink), "emit-signals", TRUE, NULL);
    g_signal_connect (appsink, "new-sample", G_CALLBACK (new_sample), NULL);

    /* play */
    gst_element_set_state (pipeline, GST_STATE_PLAYING);
    //g_main_loop_run (loop); // alternative: block in the GLib main loop

    while (true)
    {
        this_thread::sleep_for(chrono::milliseconds(10));
    }
    free_appsrc_appsink_pipeline();
}
Refer to the GStreamer tutorials for more information on how to build the pipeline, or see our sample "Generate a basic pipeline". Here the appsrc and appsink signals and properties are connected through the GObject API:
/* setup appsrc */
g_object_set (G_OBJECT (appsrc), "stream-type", 0, "format", GST_FORMAT_TIME, NULL);
g_signal_connect (G_OBJECT (appsrc), "need-data", G_CALLBACK (cb_need_data), NULL);
/* setup appsink */
g_object_set (G_OBJECT(appsink), "emit-signals", TRUE, NULL);
g_signal_connect (appsink, "new-sample", G_CALLBACK (new_sample), NULL);
First, set appsrc's "stream-type" property to 0 (push mode). Then hook the cb_need_data callback function to the need-data signal; appsrc emits this signal when it needs data, and the callback feeds the image data and pushes it into appsrc. A simplified cb_need_data:
static void cb_need_data(GstElement *appsrc, guint unused_size, gpointer user_data)
{
    // ... code omitted ...
    memcpy((guchar *)map.data, grabframe.data, gst_buffer_get_size(buffer));
    // ... code omitted ...
    g_signal_emit_by_name (appsrc, "push-buffer", buffer, &ret);
}
Once the callback fires, the data for appsrc's buffer can be filled in, and the push-buffer signal is emitted to hand the data to appsrc. Similarly, set appsink's "emit-signals" property to TRUE so it emits a signal when output is ready. Then hook the new_sample callback function to the new-sample signal to wait for notification that the output frame sample is accessible. A simplified new_sample:
static GstFlowReturn new_sample(GstElement *sink, gpointer udata)
{
    GstSample *sample;
    g_signal_emit_by_name (sink, "pull-sample", &sample);
    if (sample)
    {
        // ... code omitted ...
        memcpy(img.data, (guchar *)map.data, gst_buffer_get_size(buffer));
        pipeLineOutputVec.push_back(img.clone());
        gst_buffer_unmap(buffer, &map);
        gst_sample_unref (sample); // the buffer is owned by the sample
        return GST_FLOW_OK;
    }
    return GST_FLOW_ERROR;
}
Once appsink signals that a sample is ready and accessible, the data can be retrieved from appsink's buffer and pushed to pipeLineOutputVec. The capture while loop then checks whether pipeLineOutputVec contains data:
if (pipeLineOutputVec.size() > 0)
{
    imwrite("a.bmp", pipeLineOutputVec[0]);
    pipeLineOutputVec.erase(pipeLineOutputVec.begin());
}
If image data exists, it is saved as a.bmp next to the sample binary.
Relative to "Generate a basic pipeline in opencv", this sample is provided for those who need to leverage GStreamer's elements, such as clockoverlay, to process frame data and return it to the application. Both "Generate a basic pipeline in opencv" and this sample introduce effective ways to integrate custom applications with GStreamer. For more information on the usage of appsrc and appsink, refer to the GStreamer tutorials.
Go to the folder containing the binary and run it from a terminal or cmd:
$ ./get-stream-data-from-pipeline
and you will see two display windows and one saved file.