Gstreamer cheat sheet

Latest revision as of 22:53, 4 January 2014

This page contains various shortcuts to achieving specific functionality using Gstreamer. These functionalities are mostly related to my Digital Video Transmission experiments. There is no easy-to-read "user manual" for gstreamer but the online plugin documentation[1] often contains command line examples in addition to the API docs. Other sources of documentation:

  • The manual page for gst-launch
  • The gst-inspect tool
  • Online tutorials

The Gstreamer documentation is also available in Devhelp.

Video Test Source

To generate a test video stream use videotestsrc[2]:

 gst-launch videotestsrc ! ximagesink

Use the pattern property to select a specific pattern:

 gst-launch videotestsrc pattern=snow ! ximagesink

pattern can be both numeric [0,16] and symbolic. Some patterns can be adjusted using additional parameters.
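
For example, the zone-plate patterns are adjusted through extra videotestsrc properties. A hedged example in the style of the videotestsrc documentation (the exact property set may differ between 0.10 releases):

 gst-launch videotestsrc pattern=zone-plate kx2=20 ky2=20 kt=1 ! ximagesink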

To generate a test pattern of a given size and at a given rate a "caps filter" can be used:

 gst-launch videotestsrc ! video/x-raw-rgb, framerate=25/1, width=640, height=360 ! ximagesink

TODO: I'd like to add more about "caps filter" but I can not find any comprehensive documentation.

GstTestPattern.png

Webcam Capture

In its simplest form a v4l2src[3] can be connected directly to a video display sink:

 gst-launch v4l2src ! xvimagesink

This will grab the images at the highest possible resolution, which for my Logitech QuickCam Pro 9000 is 1600x1200. Adding a "caps filter" in between we can select the size and the desired framerate:

 gst-launch v4l2src ! video/x-raw-yuv,width=320,height=240,framerate=20/1 ! xvimagesink

If the supported framerates are not suitable, use videorate[4] to either insert or drop frames, as shown below. This can also be used to deliver a fixed framerate in case the framerate from the camera varies.
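
For example, a minimal sketch (untested) that forces a fixed 30 fps output regardless of what the camera delivers:

 gst-launch v4l2src ! video/x-raw-yuv,width=320,height=240 ! videorate ! video/x-raw-yuv,framerate=30/1 ! xvimagesink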

The "caps filter" is also used to select a specific pixel format. The Logitech QuickCam Pro 9000 supports MJPG, YUYV, RGB3, BGR3, YU12 and YV12. The pixel format in the "caps filter" can be specified using fourcc[5] labels:

 gst-launch v4l2src ! video/x-raw-yuv,format=\(fourcc\)YUY2,width=320,height=240 ! xvimagesink

YUY2 is the standard YUYV 4:2:2 pixel format and corresponds to the YUYV format on the Logitech QuickCam Pro 9000. For the other formats supported by the Logitech cameras see Pixel formats.

The camera settings can be controlled using Guvcview[6] while the image is captured using gstreamer. This requires guvcview to be executed using the -o or --control_only command line option.

GstWebcamGuvcview.png

Resizing and Cropping

videocrop
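
The videocrop element cuts a given number of pixels from each side of the picture. A minimal sketch, untested:

 gst-launch v4l2src ! video/x-raw-yuv,width=640,height=480,framerate=15/1 ! videocrop top=60 bottom=60 left=0 right=0 ! \
   ffmpegcolorspace ! xvimagesink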

For quick cropping from 4:3 to 16:9, the aspectratiocrop[7] plugin can be used:

 gst-launch v4l2src ! video/x-raw-yuv,width=640,height=480,framerate=15/1 ! aspectratiocrop aspect-ratio=16/9 ! \
            ffmpegcolorspace ! xvimagesink


videobox
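
videobox can both crop and expand: positive top/bottom/left/right values crop pixels, while negative values add a border (this is used extensively in the Compositing section below). A minimal cropping sketch, untested:

 gst-launch v4l2src ! video/x-raw-yuv,width=640,height=480,framerate=15/1 ! videobox top=60 bottom=60 ! ffmpegcolorspace ! xvimagesink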

videoscale
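
videoscale resizes the video to whatever size the downstream caps request. A minimal sketch (untested) that scales the camera image down to 320x240:

 gst-launch v4l2src ! video/x-raw-yuv,width=640,height=480,framerate=15/1 ! videoscale ! \
   video/x-raw-yuv,width=320,height=240 ! xvimagesink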

Filtering

ffmpegcolorspace

gamma
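
A minimal sketch (untested) of the gamma element, which has a single gamma property where 1.0 means no correction:

 gst-launch videotestsrc ! gamma gamma=2.0 ! ffmpegcolorspace ! ximagesink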

videobalance
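
videobalance adjusts brightness, contrast, hue and saturation. A minimal sketch, untested:

 gst-launch videotestsrc ! videobalance saturation=0.5 brightness=0.1 ! ffmpegcolorspace ! ximagesink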

smpte - transitions

smptealpha - PiP transparency using SMPTE transition patterns.

Encoding and Muxing

Single Stream

Test Pattern

Encode video to H.264 using x264 and put it into MPEG-TS transport stream:

 gst-launch -e videotestsrc ! video/x-raw-yuv, framerate=25/1, width=640, height=360 ! x264enc ! \
               mpegtsmux ! filesink location=test.ts

Note that it requires the Fluendo TS Muxer gst-fluendo-mpegmux for muxing and gst-fluendo-mpegdemux for demuxing. The -e option forces EOS on sources before shutting the pipeline down. This is useful when we write to files and want to shut down by killing gst-launch using CTRL+C or with the kill command[8]. Alternatively, we could use the num-buffers parameter to specify that we only want to record a certain number of frames. The following graph will record 500 frames and then stop:

 gst-launch videotestsrc num-buffers=500 ! video/x-raw-yuv, framerate=25/1, width=640, height=360 ! x264enc \
          ! mpegtsmux ! filesink location=test.ts

We can use the playbin plugin to play the recorded video:

 gst-launch -v playbin uri=file:///path/to/test.ts

The -v option allows us to see which blocks gstreamer decides to use. In this case it will automatically select flutsdemux for demuxing the MPEG-TS and ffdec_h264 for decoding the H.264 video. Note that there appears to be no x264dec and no ffenc_h264.

By default x264enc will use 2048 kbps but this can be set to a different value:

 gst-launch -e videotestsrc ! video/x-raw-yuv, framerate=20/1, width=640, height=480 ! x264enc bitrate=512 ! \
               mpegtsmux ! filesink location=test.ts

bitrate is specified in kbps. Note that I've changed the size to 640x480. For H.264 (and most other modern codecs) it is advantageous to use a width and height that are integer multiples of 16. There are also many other options that can be used to optimize compression, quality and speed; they can be listed as shown below.
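
The full list of x264enc properties (quantizer, key frame interval, number of threads, etc.) can be inspected with the gst-inspect tool mentioned earlier:

 gst-inspect x264enc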

TODO: Find good settings for (1) high quality (2) fast compression (3) etc...

TODO: There is also the ffmux_mpegts but I cannot make it work; it generates a 564-byte file.

Webcam

If we want to encode the webcam we need to include the ffmpegcolorspace[9] converter block:

 gst-launch -e v4l2src ! video/x-raw-yuv, framerate=10/1, width=320, height=240 ! ffmpegcolorspace ! \
               x264enc bitrate=256 ! flutsmux ! filesink location=webcam.ts

Multiple Streams

We can mux the test pattern and the webcam into one MPEG-TS stream. For this we first declare the muxer element and name it "muxer". The name is then used as reference when we connect to it:

 gst-launch -e mpegtsmux name="muxer" ! filesink location=multi.ts \
   v4l2src ! video/x-raw-yuv,format=\(fourcc\)YUY2,framerate=10/1,width=640,height=480 ! videorate ! ffmpegcolorspace ! x264enc ! muxer. \
   videotestsrc ! video/x-raw-yuv, framerate=10/1, width=640, height=480 ! x264enc ! muxer.

We can play the recorded multi.ts file with any MPEG-TS capable player:

  • VLC will play both channels at the same time in different windows.
  • Mplayer will show one stream and we can swap between the streams using the "_" key.
  • gst-launch playbin uri=file:///path/to/multi.ts will play one stream

TODO: Should be able to get both streams in gstreamer but might require some magic.

Adding Audio

Capturing and encoding audio is really easy:

 gst-launch -e pulsesrc ! audioconvert ! lamemp3enc target=1 bitrate=64 cbr=true ! filesink location=audio.mp3

This will record MP3 audio using 64 kbps CBR. Of course, we would prefer OGG or another format, but the MPEG-TS we want to mux into only supports MPEG audio for now (this is a limitation of the flutsmux plugin I believe).

To include the recorded audio in the MUX we simply include it in the pipeline and replace the file sink with the muxer:

 gst-launch -e flutsmux name="muxer" ! filesink location=multi.ts \
   v4l2src ! video/x-raw-yuv, format=\(fourcc\)YUY2, framerate=10/1, width=640, height=480 ! videorate ! ffmpegcolorspace ! x264enc ! muxer. \
   videotestsrc ! video/x-raw-yuv, framerate=10/1, width=640, height=480 ! x264enc ! muxer. \
   pulsesrc ! audioconvert ! lamemp3enc target=1 bitrate=64 cbr=true ! muxer.

The audio input device can be specified using the device property:

 gst-launch -e pulsesrc device="alsa_input.usb-046d_0809_52A63768-02.analog-mono" ! audioconvert ! \
   lamemp3enc target=1 bitrate=64 cbr=true ! filesink location=audio.mp3

The list of valid audio device names can be seen in the listing provided by pactl list or using the command[10]:

 $ pactl list | grep -A2 'Source #' | grep 'Name: ' | cut -d" " -f2

Note that this will also list the monitoring pad of the audio output:

 $ pactl list | grep -A2 'Source #' | grep 'Name: ' | cut -d" " -f2
 alsa_output.pci-0000_80_01.0.analog-stereo.monitor
 alsa_input.pci-0000_80_01.0.analog-stereo
 alsa_input.usb-046d_0809_52A63768-02.analog-mono

To list all monitor sources we can use the command[11]:

 $ pactl list | grep -A2 'Source #' | grep 'Name: .*\.monitor$' | cut -d" " -f2

Decoding and Demuxing

TBD
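
A minimal sketch (untested) for decoding and displaying the MPEG-TS file recorded earlier, using the flutsdemux and ffdec_h264 elements mentioned above; a decodebin could be used instead of the explicit demuxer/decoder pair:

 gst-launch filesrc location=test.ts ! flutsdemux ! queue ! ffdec_h264 ! ffmpegcolorspace ! xvimagesink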


Network Streaming

TBD

MPEG-TS can be streamed over UDP (TBC)
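
A hedged, untested sketch of sending the MPEG-TS over UDP; the receiver caps and the host/port values are assumptions:

 gst-launch -e videotestsrc ! video/x-raw-yuv, framerate=25/1, width=640, height=360 ! x264enc ! mpegtsmux ! \
   udpsink host=127.0.0.1 port=4444

and on the receiving side:

 gst-launch udpsrc port=4444 caps="video/mpegts, systemstream=(boolean)true, packetsize=(int)188" ! \
   flutsdemux ! queue ! ffdec_h264 ! ffmpegcolorspace ! xvimagesink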

Raw videos, e.g. H.264, can be packed into RTP before sending over UDP (TBC)

 From man gst-launch:
 Network streaming
 
      Stream video using RTP and network elements.
 
      gst-launch v4l2src ! video/x-raw-yuv,width=128,height=96,format='(fourcc)'UYVY ! ffmpegcolorspace ! ffenc_h263
      ! video/x-h263 ! rtph263ppay pt=96 ! udpsink host=192.168.1.1 port=5000 sync=false
      Use this command on the receiver
 
      gst-launch  udpsrc  port=5000 ! application/x-rtp, clock-rate=90000,payload=96 ! rtph263pdepay queue-delay=0 !
      ffdec_h263 ! xvimagesink
      This command would be run on the transmitter

This example does not work for me! See: http://cgit.freedesktop.org/gstreamer/gst-plugins-good/tree/gst/rtp/README#n251

Compositing

Picture in Picture

The videomixer[12] can be used to mix two or more video streams together forming a PiP effect. The following example will put a 200x150 pixels snow test pattern over a 640x360 pixels SMPTE pattern:

 gst-launch -e videotestsrc pattern="snow" ! video/x-raw-yuv, framerate=10/1, width=200, height=150 ! videomixer name=mix ! \
   ffmpegcolorspace ! xvimagesink videotestsrc ! video/x-raw-yuv, framerate=10/1, width=640, height=360 ! mix.

GstPipDefault.png

GstVideoMixerPad

According to the online documentation[12] the position and Z-order can be adjusted using GstVideoMixerPad properties[13]. These properties can be accessed using Python or C (see this post) or even from gst-launch using references to pads, namely sink_i::xpos, sink_i::ypos, sink_i::alpha and sink_i::zorder, where i is the input stream number starting from 0. The GstVideoMixerPad properties are specified together with the declaration of the videomixer:

 gst-launch videotestsrc pattern="snow" ! video/x-raw-yuv, framerate=10/1, width=200, height=150 ! \
   videomixer name=mix sink_1::xpos=20 sink_1::ypos=20 sink_1::alpha=0.5 sink_1::zorder=3 sink_2::xpos=100 sink_2::ypos=100 sink_2::zorder=2 ! \
   ffmpegcolorspace ! xvimagesink videotestsrc pattern=13 ! video/x-raw-yuv, framerate=10/1, width=200, height=150 ! mix. \
   videotestsrc ! video/x-raw-yuv, framerate=10/1, width=640, height=360 ! mix.

GstVideoMixerPad.png

Thanks to Stefan Kost for this very useful tip.

The order in which input streams are connected to videomixer inputs is deterministic though difficult to predict. We can have full control over which video stream is connected to which videomixer input by explicitly specifying the pads when we link:

 gst-launch \
   videomixer name=mix sink_1::xpos=20 sink_1::ypos=20 sink_1::alpha=0.5 sink_1::zorder=3 sink_2::xpos=100 sink_2::ypos=100 sink_2::zorder=2 ! \
   ffmpegcolorspace ! xvimagesink \
   videotestsrc ! video/x-raw-yuv, framerate=10/1, width=640, height=360 ! mix.sink_0 \
   videotestsrc pattern="snow" ! video/x-raw-yuv, framerate=10/1, width=200, height=150 ! mix.sink_1 \
   videotestsrc pattern=13 ! video/x-raw-yuv, framerate=10/1, width=200, height=150 ! mix.sink_2

Using this trick we can swap between the two small pictures by simply swapping mix.sink_1 with mix.sink_2.


VideoBox

We can also move the small video around anywhere using the videobox[14] element with a transparent border. The videobox is inserted between the source video and the mixer:

GstVideomixerDia.png

The following pipeline will move the small snow pattern 20 pixels to the right and 25 pixels down:

 gst-launch -e videotestsrc pattern="snow" ! video/x-raw-yuv, framerate=10/1, width=200, height=150 ! videobox border-alpha=0 top=-20 left=-25 ! \
   videomixer name=mix ! ffmpegcolorspace ! xvimagesink videotestsrc ! video/x-raw-yuv, framerate=10/1, width=640, height=360 ! mix.

GstPipPosition.png

Note that the top and left values are negative, which means that pixels will be added. A positive value means that pixels are cropped from the original image. If we'd made border-alpha 1.0 we'd have seen a black border on the top and the left of the child image.

Transparency of each input stream can be controlled by passing the stream through an alpha filter. This is useful for the main (background) image. For the child image we do not need to add an additional alpha filter because the videobox can have its own alpha channel:

 gst-launch -e videotestsrc pattern="snow" ! video/x-raw-yuv, framerate=10/1, width=200, height=150 ! \
   videobox border-alpha=0 alpha=0.6 top=-20 left=-25 ! videomixer name=mix ! ffmpegcolorspace ! xvimagesink \
   videotestsrc ! video/x-raw-yuv, framerate=10/1, width=640, height=360 ! mix.

GstPipAlpha.png

A border can be added around the child image by adding an additional videobox[14] where the top/left/right/bottom values correspond to the desired border width and border-alpha is set to 1.0 (opaque):

 gst-launch -e videotestsrc pattern="snow" ! video/x-raw-yuv, framerate=10/1, width=200, height=150 ! \
   videobox border-alpha=1.0 top=-2 bottom=-2 left=-2 right=-2 ! videobox border-alpha=0 alpha=0.6 top=-20 left=-25 ! \
   videomixer name=mix ! ffmpegcolorspace ! xvimagesink videotestsrc ! video/x-raw-yuv, framerate=10/1, width=640, height=360 ! mix.

GstPipBorder.png

Video Wall

We can of course combine more than two incoming video streams. The following pipeline will take four incoming streams and mix them into a Video Matrix / Wall:

 gst-launch -e videomixer name=mix ! ffmpegcolorspace ! xvimagesink \
   videotestsrc pattern=1 ! video/x-raw-yuv, framerate=5/1, width=320, height=180 ! videobox border-alpha=0 top=0 left=0 ! mix. \
   videotestsrc pattern=15 ! video/x-raw-yuv, framerate=5/1, width=320, height=180 ! videobox border-alpha=0 top=0 left=-320 ! mix. \
   videotestsrc pattern=13 ! video/x-raw-yuv, framerate=5/1, width=320, height=180 ! videobox border-alpha=0 top=-180 left=0 ! mix. \
   videotestsrc pattern=0 ! video/x-raw-yuv, framerate=5/1, width=320, height=180 ! videobox border-alpha=0 top=-180 left=-320 ! mix. \
   videotestsrc pattern=3 ! video/x-raw-yuv, framerate=5/1, width=640, height=360 ! mix.

We had to use a fifth stream as the large background stream.

GstVideoWall.png

For a more complex example see Video Wall: Live from Pluto.

Text Overlay

The textoverlay[15] plugin can be used to add text to the video stream:

 gst-launch videotestsrc ! video/x-raw-yuv,width=640,height=480,framerate=15/1 ! textoverlay text="Hello" ! ffmpegcolorspace ! ximagesink

It has many options for text positioning and alignment. The user can also specify font properties as a Pango font description string, e.g. "Sans Italic 24".

TODO: A few font description examples.
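
For instance, a hedged sketch using the font-desc property (the same property is used in the video wall example further down):

 gst-launch videotestsrc ! video/x-raw-yuv,width=640,height=480,framerate=15/1 ! \
   textoverlay text="Hello" font-desc="Monospace Bold 32" valign=top halign=left shaded-background=true ! \
   ffmpegcolorspace ! ximagesink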

Time Overlay

Elapsed time can be added using the timeoverlay[16] plugin:

 gst-launch videotestsrc ! timeoverlay ! xvimagesink

Timeoverlay inherits the properties of textoverlay, so the text appearance can be adjusted using the same properties:

 gst-launch -v videotestsrc ! video/x-raw-yuv, framerate=25/1, width=640, height=360 ! \
   timeoverlay halign=left valign=bottom text="Stream time:" shaded-background=true ! xvimagesink

GstTimeOverlay.png

Alternatively, cairotimeoverlay[17] can be used but it doesn't seem to have any properties:

 gst-launch videotestsrc ! cairotimeoverlay ! xvimagesink

Instead of elapsed time, the system date and time can be added using the clockoverlay[18] plugin:

 gst-launch videotestsrc ! clockoverlay ! xvimagesink

Clockoverlay also inherits the properties of textoverlay. In addition, clockoverlay allows setting the time format:

 gst-launch videotestsrc ! clockoverlay halign=right valign=bottom shaded-background=true time-format="%Y.%m.%d" ! ffmpegcolorspace ! ximagesink

Complete Examples

Time-Lapse Video

From recorded video

A simple "surveillance camera" implementation using the Logitech QuickCam Vision Pro 9000 and Gstreamer. Frames from the camera are captured at 5 fps. The date, time and elapsed time are added. The stream is displayed on the screen at the captured rate and resolution and saved to an OGG file at 1 fps:

 gst-launch -e v4l2src ! video/x-raw-yuv,format=\(fourcc\)YUY2,width=1280,height=720,framerate=5/1 ! \
   ffmpegcolorspace ! \
   timeoverlay halign=right valign=top ! clockoverlay halign=left valign=top time-format="%Y/%m/%d %H:%M:%S" ! \
   tee name="splitter" ! queue ! xvimagesink sync=false splitter. ! \
   queue ! videorate ! video/x-raw-yuv,framerate=1/1 ! \
   theoraenc bitrate=256 ! oggmux ! filesink location=webcam.ogg

Note: We can replace theoraenc+oggmux with x264enc+someothermuxer but then the pipeline will freeze unless we make the queue[19] element in front of the xvimagesink leaky, i.e. "queue leaky=1".

GstSurveilance.png

To create a time-lapse video we have to extract the individual frames from the recorded ogg file and then assemble them into a new video using the new framerate.

Extract the individual frames:

 ffmpeg -i webcam.ogg -r 1 -sameq -f image2 img/webcam-%05d.jpg

Assemble frames to new video:

 ffmpeg -r 50 -i img/webcam-%05d.jpg -vcodec libx264 -b 5000k -r 25 timelapse.mov

See the time-lapse video on YouTube.

The "time-lapse factor" can be controlled by setting the input rate. Since we recorded 1 fps and specified and input rate of 50 fps while assembling the time-lapse videos the effective time-lapse factor will be 0.5 fps corresponding to 2 seconds per frame. If we reduce the input framerate to 25, the time-lapse speed will be half, i.e. 1 second per frame:

 ffmpeg -r 25 -i img/webcam-%05d.jpg -vcodec libx264 -b 5000k -r 25 timelapse.mov

See the half-speed time-lapse on YouTube.

Single frame capture

In the first example we recorded the captured frames to a Theora-encoded video file. In order to change the frame rate we had to convert the recorded video to single frames and then re-encode using ffmpeg. A better method is to skip the intermediate video and capture directly to single frames. We can do this using the multifilesink[20] and multifilesrc[21] elements.

This will record the camera stream to PNG files:

 gst-launch -e v4l2src ! video/x-raw-yuv,format=\(fourcc\)YUY2,width=1280,height=720,framerate=5/1 ! ffmpegcolorspace ! \
   timeoverlay halign=right valign=bottom ! clockoverlay halign=left valign=bottom time-format="%Y/%m/%d %H:%M:%S" ! \
   videorate ! video/x-raw-rgb,framerate=1/1 ! ffmpegcolorspace ! pngenc snapshot=false ! multifilesink location="frame%05d.png"

It was a quick hack and it might have been possible to avoid the two ffmpegcolorspace converters. We can convert to a time-lapse video as before (this time I used a different codec and framerate):

 ffmpeg -i timelapse.mp3 -r 100 -i img/frame%05d.png -sameq -r 50 -ab 320k timelapse.mp4

Note that this time I generated an MP4 video with 50 fps. You can watch the result on YouTube or download the MP4 file here.

QCVP9kSample.png


Video Wall: Live from Pluto

Our robotic spaceship has landed on Pluto and is ready to transmit awesome video from the three onboard cameras CAM1, CAM2 and CAM3. We want to show the images on a video wall with a nice background, something like this:

GstPipExample.png

We can accomplish this using picture-in-picture compositing but it is now a little more complicated than the simple examples shown earlier. We have:

  • Three small video feeds of size 350x250 pixels
  • Each small video feed has a textoverlay showing CAMx
  • A large 1280x720 pixel background coming from a still image (JPG file)
  • A textoverlay saying "Live from Pluto" at the bottom left of the main screen
  • The three video feeds, CAM1, CAM2 and CAM3 are put on top of the main screen.

The diagram for the pipeline is shown below. The text above the arrows specifies the pixel formats for a given video stream in the pipeline. If the pipeline fails to launch due to an error that says something about streaming task paused, reason not-negotiated (-4), it is very often due to an incompatible connection between two blocks. Gstreamer is not always very good at telling you that.

GstVideoWallDia.png

And here is the complete pipeline as entered on the command line:

 gst-launch -e videomixer name=mix ! ffmpegcolorspace ! xvimagesink \
   videotestsrc pattern=0 ! video/x-raw-yuv, framerate=1/1, width=350, height=250 ! \
     textoverlay font-desc="Sans 24" text="CAM1" valign=top halign=left shaded-background=true ! \
     videobox border-alpha=0 top=-200 left=-50 ! mix. \
   videotestsrc pattern="snow" ! video/x-raw-yuv, framerate=1/1, width=350, height=250 ! \
     textoverlay font-desc="Sans 24" text="CAM2" valign=top halign=left shaded-background=true ! \
     videobox border-alpha=0 top=-200 left=-450 ! mix. \
   videotestsrc pattern=13 ! video/x-raw-yuv, framerate=1/1, width=350, height=250 ! \
     textoverlay font-desc="Sans 24" text="CAM3" valign=top halign=left shaded-background=true ! \
     videobox border-alpha=0 top=-200 left=-850 ! mix. \
   multifilesrc location="pluto.jpg" caps="image/jpeg,framerate=1/1" ! jpegdec ! \
     textoverlay font-desc="Sans 26" text="Live from Pluto" halign=left shaded-background=true auto-resize=false ! \
     ffmpegcolorspace ! video/x-raw-yuv,format=\(fourcc\)AYUV ! mix.

A few notes on the pipeline:

  • For the large background image, which is a still frame, I wanted to use the imagefreeze block which generates a video stream from a single image file. Unfortunately, it seems that this block is very new and not in the gstreamer package that comes with Ubuntu 10.04. Therefore, I had to do the trick with multifilesrc, making it read the same file over and over again.
  • I had a hard time getting this pipeline to work. Eventually I found out that my problems were due to incompatible caps in the multifilesrc part of the pipeline. That's why there is an extra color conversion block between the textoverlay and the videomixer.
  • The videobox elements are used to add a transparent border to the small video feed, causing the real video to "move".

You can watch the Live from Pluto GStreamer video wall in action on YouTube.

References

  1. Gstreamer documentation http://gstreamer.freedesktop.org/documentation/
  2. GStreamer Base Plugins 0.10 Plugins Reference Manual – videotestsrc
  3. GStreamer Good Plugins 0.10 Plugins Reference Manual – v4l2src
  4. GStreamer Base Plugins 0.10 Plugins Reference Manual – videorate
  5. FOURCC website – YUV formats
  6. Gtk+ UVC Viewer: http://guvcview.berlios.de/
  7. GStreamer Good Plugins 0.10 Plugins Reference Manual – aspectratiocrop
  8. Elphel Development Blog – Interfacing Elphel cameras with GStreamer, OpenCV, OpenGL/GLSL and python.
  9. GStreamer Base Plugins 0.10 Plugins Reference Manual – ffmpegcolorspace. The details are available via gst-inspect ffmpegcolorspace.
  10. Pulseaudio FAQ – How do I record stuff? http://pulseaudio.org/wiki/FAQ#HowdoIrecordstuff
  11. Pulseaudio FAQ – How do I record other programs' output? http://pulseaudio.org/wiki/FAQ#HowdoIrecordotherprogramsoutput
  12. GStreamer Good Plugins 0.10 Plugins Reference Manual – videomixer
  13. GStreamer Good Plugins 0.10 Plugins Reference Manual – GstVideoMixerPad
  14. GStreamer Good Plugins 0.10 Plugins Reference Manual – videobox
  15. GStreamer Base Plugins 0.10 Plugins Reference Manual – textoverlay
  16. GStreamer Base Plugins 0.10 Plugins Reference Manual – timeoverlay
  17. GStreamer Good Plugins 0.10 Plugins Reference Manual – cairotimeoverlay
  18. GStreamer Base Plugins 0.10 Plugins Reference Manual – clockoverlay
  19. GStreamer Core Plugins 0.10 Plugins Reference Manual – queue
  20. GStreamer Good Plugins 0.10 Plugins Reference Manual – multifilesink
  21. GStreamer Good Plugins 0.10 Plugins Reference Manual – multifilesrc

Other useful links: