GStreamer cheat sheet

This page contains various shortcuts to achieving specific functionality using GStreamer. These functionalities are mostly related to my Digital Video Transmission experiments. There is no easy-to-read "user manual" for GStreamer, but the online plugin documentation often contains command-line examples in addition to the API docs. Other sources of documentation:
 * The GStreamer documentation, also available in Devhelp
 * The manual page for gst-launch
 * The gst-inspect tool
 * Online tutorials

Video Test Source
To generate a test video stream use :

gst-launch videotestsrc ! ximagesink

Use the pattern property to select a specific pattern:

gst-launch videotestsrc pattern=snow ! ximagesink

The pattern can be given both numerically [0,16] and symbolically. Some patterns can be adjusted using additional parameters.

To generate a test pattern of a given size and at a given rate a "caps filter" can be used:

gst-launch videotestsrc ! video/x-raw-rgb, framerate=25/1, width=640, height=360 ! ximagesink

TODO: I'd like to add more about "caps filter" but I can not find any comprehensive documentation.
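As far as I understand it, a caps string placed between two elements is shorthand for an explicit capsfilter element, so the two pipelines below should be equivalent (a sketch, not verified against the documentation):

```shell
# Shorthand form: the caps string between elements becomes a capsfilter.
gst-launch videotestsrc ! video/x-raw-rgb, framerate=25/1, width=640, height=360 ! ximagesink

# Explicit form using the capsfilter element and its caps property.
gst-launch videotestsrc ! capsfilter caps="video/x-raw-rgb, framerate=25/1, width=640, height=360" ! ximagesink
```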



Webcam Capture
In its simplest form a v4l2src can be connected directly to a video display sink:

gst-launch v4l2src ! xvimagesink

This will grab the images at the highest possible resolution, which for my Logitech QuickCam Pro 9000 is 1600x1200. Adding a "caps filter" in between, we can select the size and the desired framerate:

gst-launch v4l2src ! video/x-raw-yuv,width=320,height=240,framerate=20/1 ! xvimagesink

If the supported framerates are not suitable, use videorate to either insert or drop frames. This can also be used to deliver a fixed framerate in case the framerate from the camera varies.
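A sketch of using the videorate element to force a fixed output rate (the 30/1 rate here is just an illustration; videorate duplicates or drops frames to satisfy the downstream caps):

```shell
# Whatever rate the camera produces, the sink sees a steady 30 fps.
gst-launch v4l2src ! video/x-raw-yuv, width=320, height=240 ! \
  videorate ! video/x-raw-yuv, framerate=30/1 ! xvimagesink
```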

The "caps filter" is also used to select a specific pixel format. The Logitech QuickCam Pro 9000 supports MJPG, YUYV, RGB3, BGR3, YU12 and YV12. The pixel format in the "caps filter" can be specified using fourcc labels:

gst-launch v4l2src ! video/x-raw-yuv,format=\(fourcc\)YUY2,width=320,height=240 ! xvimagesink

YUY2 is the standard YUYV 4:2:2 pixel format and corresponds to the YUYV format on the Logitech QuickCam Pro 9000.

The camera settings can be controlled using Guvcview while the image is captured using GStreamer. This requires guvcview to be executed using its control-panel-only command line option.



Resizing and Cropping
For quick cropping from 4:3 to 16:9, the aspectratiocrop plugin can be used:

gst-launch v4l2src ! video/x-raw-yuv,width=640,height=480,framerate=15/1 ! aspectratiocrop aspect-ratio=16/9 ! ffmpegcolorspace ! xvimagesink

Test Pattern
Encode video to H.264 using x264 and put it into an MPEG-TS transport stream:

gst-launch -e videotestsrc ! video/x-raw-yuv, framerate=25/1, width=640, height=360 ! x264enc ! flutsmux ! filesink location=test.ts

Note that this requires the Fluendo TS muxer gst-fluendo-mpegmux for muxing and gst-fluendo-mpegdemux for demuxing. The -e option forces EOS on the sources before shutting the pipeline down. This is useful when we write to files and want to shut down by killing gst-launch using CTRL+C or with the kill command. Alternatively, we could use the num-buffers parameter to specify that we only want to record a certain number of frames. The following pipeline will record 500 frames and then stop:

gst-launch videotestsrc num-buffers=500 ! video/x-raw-yuv, framerate=25/1, width=640, height=360 ! x264enc ! flutsmux ! filesink location=test.ts

We can use the playbin plugin to play the recorded video:

gst-launch -v playbin uri=file:///path/to/test.ts

The -v option allows us to see which blocks GStreamer decides to use. In this case it will automatically select flutsdemux for demuxing the MPEG-TS and ffdec_h264 for decoding the H.264 video.

By default x264enc will use 2048 kbps, but this can be set to a different value:

gst-launch -e videotestsrc ! video/x-raw-yuv, framerate=20/1, width=640, height=480 ! x264enc bitrate=512 ! flutsmux ! filesink location=test.ts

The bitrate is specified in kbps. Note that I've changed the size to 640x480. For H.264 (and most other modern codecs) it is advantageous to use a width and height that are integer multiples of 16. There are also many other options that can be used to optimize compression, quality and speed.
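As one sketch of tuning for quality rather than bitrate (assuming your x264enc build exposes the pass, quantizer and key-int-max properties; check with gst-inspect x264enc, as available options vary between versions):

```shell
# Constant-quantizer mode: quality stays constant while bitrate varies.
# A lower quantizer means higher quality and larger files.
gst-launch -e videotestsrc num-buffers=500 ! video/x-raw-yuv, framerate=25/1, width=640, height=480 ! \
  x264enc pass=quant quantizer=20 key-int-max=50 ! flutsmux ! filesink location=quality.ts
```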

TODO: Find good settings for (1) high quality (2) fast compression (3) etc...

TODO: There is also an alternative MPEG-TS muxer, but I cannot make it work; it generates a 564-byte file.

Webcam
If we want to encode the webcam we need to include the ffmpegcolorspace converter block:

gst-launch -e v4l2src ! video/x-raw-yuv, framerate=10/1, width=320, height=240 ! ffmpegcolorspace ! \
  x264enc bitrate=256 ! flutsmux ! filesink location=webcam.ts

Multiple Streams
We can mux the test pattern and the webcam into one MPEG-TS stream. For this we first declare the muxer element and name it "muxer". The name is then used as reference when we connect to it:

gst-launch -e flutsmux name="muxer" ! filesink location=multi.ts \
  v4l2src ! video/x-raw-yuv, format=\(fourcc\)YUY2, framerate=10/1, width=640, height=480 ! videorate ! ffmpegcolorspace ! x264enc ! muxer. \
  videotestsrc ! video/x-raw-yuv, framerate=10/1, width=640, height=480 ! x264enc ! muxer.

We can play the recorded multi.ts file with any MPEG-TS capable player:
 * VLC will play both channels at the same time in different windows.
 * Mplayer will show one stream and we can swap between the streams using the "_" key.
 * Other players will typically play just one of the streams.

TODO: Should be able to get both streams in gstreamer but might require some magic.

Adding Audio
Capturing and encoding audio is really easy:

gst-launch -e pulsesrc ! audioconvert ! lamemp3enc target=1 bitrate=64 cbr=true ! filesink location=audio.mp3

This will record MP3 audio at 64 kbps CBR. Of course, we would prefer OGG or a similar format, but the MPEG-TS we want to mux into only supports MPEG audio for now (a limitation of the flutsmux plugin, I believe).

To include the recorded audio in the MUX we simply include it in the pipeline and replace the file sink with the muxer:

gst-launch -e flutsmux name="muxer" ! filesink location=multi.ts \
  v4l2src ! video/x-raw-yuv, format=\(fourcc\)YUY2, framerate=10/1, width=640, height=480 ! videorate ! ffmpegcolorspace ! x264enc ! muxer. \
  videotestsrc ! video/x-raw-yuv, framerate=10/1, width=640, height=480 ! x264enc ! muxer. \
  pulsesrc ! audioconvert ! lamemp3enc target=1 bitrate=64 cbr=true ! muxer.

Decoding and Demuxing
TBD
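Until this section is written, here is a minimal sketch (assuming the Fluendo demuxer flutsdemux and the FFmpeg decoder ffdec_h264 are installed):

```shell
# Demux the recorded transport stream and decode the H.264 video.
gst-launch filesrc location=test.ts ! flutsdemux ! ffdec_h264 ! ffmpegcolorspace ! xvimagesink

# Or let decodebin pick the demuxer and decoder automatically.
gst-launch filesrc location=test.ts ! decodebin ! ffmpegcolorspace ! xvimagesink
```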

Network Streaming
TBD

MPEG-TS can be streamed over UDP (TBC)
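A hedged sketch of streaming the muxed MPEG-TS over UDP (host and port are placeholders; the receiver side assumes flutsdemux can typefind the stream arriving from udpsrc):

```shell
# Sender: mux H.264 into MPEG-TS and push it over UDP.
gst-launch -e videotestsrc ! video/x-raw-yuv, framerate=25/1, width=640, height=360 ! \
  x264enc ! flutsmux ! udpsink host=127.0.0.1 port=5004

# Receiver: demux and decode the incoming transport stream.
gst-launch udpsrc port=5004 ! flutsdemux ! ffdec_h264 ! ffmpegcolorspace ! xvimagesink
```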

Raw videos, e.g. H.264, can be packed into RTP before sending over UDP (TBC)
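An untested sketch of the RTP variant for H.264; note that the receiver must be given the payload caps explicitly, since RTP itself carries no type information:

```shell
# Sender: payload the H.264 elementary stream into RTP packets.
gst-launch -e v4l2src ! video/x-raw-yuv, width=320, height=240 ! ffmpegcolorspace ! \
  x264enc ! rtph264pay pt=96 ! udpsink host=192.168.1.1 port=5000

# Receiver: the caps string describes what the sender is transmitting.
gst-launch udpsrc port=5000 caps="application/x-rtp, media=video, clock-rate=90000, encoding-name=H264, payload=96" ! \
  rtph264depay ! ffdec_h264 ! ffmpegcolorspace ! xvimagesink
```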

Stream video using RTP and network elements.

Run this command on the transmitter:

gst-launch v4l2src ! video/x-raw-yuv,width=128,height=96,format='(fourcc)'UYVY ! ffmpegcolorspace ! ffenc_h263 ! video/x-h263 ! rtph263ppay pt=96 ! udpsink host=192.168.1.1 port=5000 sync=false

Use this command on the receiver:

gst-launch udpsrc port=5000 ! application/x-rtp, clock-rate=90000, payload=96 ! rtph263pdepay queue-delay=0 ! ffdec_h263 ! xvimagesink

Picture in Picture
The videomixer plugin can be used to mix two or more video streams together, forming a PiP effect. The following example will put a 200x150 pixel snow test pattern over a 640x360 pixel SMPTE pattern:

gst-launch -e videotestsrc pattern="snow" ! video/x-raw-yuv, framerate=10/1, width=200, height=150 ! \
  videomixer name=mix ! ffmpegcolorspace ! xvimagesink \
  videotestsrc ! video/x-raw-yuv, framerate=10/1, width=640, height=360 ! mix.



According to the online documentation the position and Z-order can be adjusted using the xpos, ypos and zorder pad properties; however, I do not yet know how to use this.

We can also position the small picture by using the videobox element and adding a transparent border. The following example will move the small snow pattern 20 pixels to the right and 25 pixels down:

gst-launch -e videotestsrc pattern="snow" ! video/x-raw-yuv, framerate=10/1, width=200, height=150 ! \
  videobox border-alpha=0 top=-20 left=-25 ! videomixer name=mix ! ffmpegcolorspace ! xvimagesink \
  videotestsrc ! video/x-raw-yuv, framerate=10/1, width=640, height=360 ! mix.



Note that the top and left values are negative, which means that pixels will be added. A positive value means that pixels are cropped from the original image. If we had made border-alpha 1.0, we would have seen a black border on the top and the left of the child image.

Transparency of each input stream can be controlled by passing the stream through an alpha filter. This is useful for the main (background) image. For the child image we do not need to add an additional alpha filter because videobox can have its own alpha channel:

gst-launch -e videotestsrc pattern="snow" ! video/x-raw-yuv, framerate=10/1, width=200, height=150 ! \
  videobox border-alpha=0 alpha=0.6 top=-20 left=-25 ! videomixer name=mix ! ffmpegcolorspace ! xvimagesink \
  videotestsrc ! video/x-raw-yuv, framerate=10/1, width=640, height=360 ! mix.



A border can be added around the child image by adding an additional videobox where the top/left/right/bottom values correspond to the desired border width and border-alpha is set to 1.0 (opaque):

gst-launch -e videotestsrc pattern="snow" ! video/x-raw-yuv, framerate=10/1, width=200, height=150 ! \
  videobox border-alpha=1.0 top=-2 bottom=-2 left=-2 right=-2 ! videobox border-alpha=0 alpha=0.6 top=-20 left=-25 ! \
  videomixer name=mix ! ffmpegcolorspace ! xvimagesink \
  videotestsrc ! video/x-raw-yuv, framerate=10/1, width=640, height=360 ! mix.



TODO: four videos combined into one video matrix

Text Overlay
The textoverlay plugin can be used to add text to the video stream:

gst-launch videotestsrc ! video/x-raw-yuv,width=640,height=480,framerate=15/1 ! textoverlay text="Hello" ! ffmpegcolorspace ! ximagesink

It has many options for text positioning and alignment. The user can also specify font properties as a Pango font description string.
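A couple of Pango font description sketches passed via the font-desc property (family, style and size are free-form; the families shown are just examples):

```shell
# Large bold sans-serif text
gst-launch videotestsrc ! textoverlay text="Hello" font-desc="Sans Bold 24" ! ffmpegcolorspace ! ximagesink

# Italic monospace text
gst-launch videotestsrc ! textoverlay text="Hello" font-desc="Monospace Italic 16" ! ffmpegcolorspace ! ximagesink
```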

TODO: A few font description examples.

Time Overlay
Elapsed time can be added using the timeoverlay plugin:

gst-launch videotestsrc ! timeoverlay ! xvimagesink

timeoverlay inherits the properties of textoverlay, so the text properties can be set in the same way:

gst-launch -v videotestsrc ! video/x-raw-yuv, framerate=25/1, width=640, height=360 ! \
  timeoverlay halign=left valign=bottom text="Stream time:" shaded-background=true ! xvimagesink



Alternatively, cairotimeoverlay can be used, but it doesn't seem to have any properties:

gst-launch videotestsrc ! cairotimeoverlay ! xvimagesink

Instead of elapsed time, the system date and time can be added using the clockoverlay plugin:

gst-launch videotestsrc ! clockoverlay ! xvimagesink

clockoverlay also inherits the properties of textoverlay. In addition, it allows setting the time format:

gst-launch videotestsrc ! clockoverlay halign=right valign=bottom shaded-background=true time-format="%Y.%m.%d" ! ffmpegcolorspace ! ximagesink

Time-Lapse Video
A simple "surveillance camera" implementation using the Logitech QuickCam Vision Pro 9000 and GStreamer. Frames from the camera are captured at 5 fps. The date, time and elapsed time are added. The stream is displayed on the screen at the captured rate and resolution, and saved to an OGG file at 1 fps:

gst-launch -e v4l2src ! video/x-raw-yuv, format=\(fourcc\)YUY2, width=1280, height=720, framerate=5/1 ! \
  ffmpegcolorspace ! timeoverlay halign=right valign=top ! clockoverlay halign=left valign=top time-format="%Y/%m/%d %H:%M:%S" ! \
  tee name="splitter" ! queue ! xvimagesink sync=false splitter. ! queue ! videorate ! video/x-raw-yuv, framerate=1/1 ! \
  theoraenc bitrate=256 ! oggmux ! filesink location=webcam.ogg



To create a time-lapse video we have to extract the individual frames from the recorded OGG file, then assemble them into a new video using the new framerate.

Extract the individual frames:

ffmpeg -i webcam.ogg -r 1 -sameq -f image2 img/webcam-%05d.jpg

Assemble frames to new video:

ffmpeg -r 50 -i img/webcam-%05d.jpg -vcodec libx264 -b 5000k -r 25 timelapse.mov

See the time-lapse video on YouTube.