Making Time-lapse Videos

This page describes the setups and techniques I am using to capture and create time-lapse videos.

Overview

The basic idea is very simple: capture each frame of the video as an individual image and assemble the images into a video. Thus there are two necessary parts to this process:

  1. A camera that can be controlled to capture images at regular intervals. The delay between frames depends on the ratio between the length of the real-time event and the playback length. For example, suppose you want to create a time-lapse video at 25 fps of an event that lasts 15 hours, and you want the playback length to be 3 minutes. For a 3 minute video you'll need 4500 frames, and over 15 hours this corresponds to capturing one frame every 12 seconds (see the sketch after this list).
  2. Video encoder software that can take the recorded frames and encode them into a video. A good video encoder will also allow post-processing, e.g. scaling and cropping.
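
As a quick check of the arithmetic in point 1, the capture interval is simply the event length divided by the number of frames in the final clip. A minimal shell sketch using the example numbers from above:

 # event of 15 hours, target clip of 3 minutes at 25 fps
 event_seconds=$((15 * 3600))        # 54000 seconds
 frames=$((3 * 60 * 25))             # 4500 frames in the final clip
 echo $((event_seconds / frames))    # prints 12 -> capture one frame every 12 seconds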

An alternative technique is to record a standard video with a regular video camera and convert the captured video using fast playback. This technique is only feasible for short events, up to an hour or so.
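
One hedged way to do that fast-playback conversion directly, assuming an FFmpeg build that includes the setpts video filter, is to rescale the frame timestamps and drop the audio (the speed-up factor of 25 and the file names are placeholders):

 ffmpeg -i captured.avi -vf "setpts=PTS/25" -an fast.avi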

Logitech QuickCam Pro 9000

The Logitech QuickCam Pro 9000 is UVC compatible[1] and therefore works extremely well with recent Linux kernels.

The control software to use is the Gtk+ UVC Viewer.
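
Besides guvcview, a simple capture loop on the command line is another way to grab a frame at a fixed interval. This is only a rough sketch, assuming FFmpeg's video4linux2 input and a camera at /dev/video0 (the device path and the 12 second interval are assumptions, not part of my setup):

 #!/bin/sh
 # Grab one frame from the webcam every 12 seconds until interrupted
 i=1
 while true; do
     ffmpeg -f video4linux2 -i /dev/video0 -vframes 1 Image-$i.jpg
     i=$((i + 1))
     sleep 12
 done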

Post-processing using FFmpeg

Processing is done in Linux using FFmpeg[2].

To convert the images called Image-1.jpg, Image-2.jpg, ... to a video in .mov format:

 ffmpeg -i Image-%d.jpg -sameq -r 25 timelapse.mov   (FIXME: Use -b instead)
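
As the FIXME suggests, the quality can instead be controlled with an explicit video bitrate rather than -sameq; a hedged variant (the 5000k value is just an example):

 ffmpeg -i Image-%d.jpg -b 5000k -r 25 timelapse.mov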

When using a webcam with GStreamer we can record a video at 1 fps. It would be nice if we could time-lapse it by simply treating it as 25 fps input, something like this:

 ffmpeg -r 25 -i input.ogg -sameq -r 25 output.ogg

Note that using the -r option on the input only works with raw video streams. If we have the video in an OGG container we have to convert it to individual frames first and then assemble the frames into a new video.
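
A sketch of that two-step workaround, assuming the same FFmpeg build as above (the file names are placeholders, and the result is written to .mov as in the earlier example): first dump every frame of the 1 fps recording to images, then reassemble them at 25 fps.

 ffmpeg -i input.ogg -sameq frame-%d.jpg
 ffmpeg -r 25 -i frame-%d.jpg -sameq output.mov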

References

  1. http://quickcamteam.net/
  2. http://ffmpeg.org/