
Capture Setup and Examples

This could be wrong; I haven't tested the capture card sections yet. If things break, let me know via the issue tracker.

Quick Start

  • Create a buffer file: dd if=/dev/zero bs=1 count=1 seek=SIZE of=BUFFER_FILE. SIZE is the buffer size, e.g. 20G or 512M. (A worked example follows this list.)
  • Direct capture from Video4Linux2 device: v4l2_ingest /dev/videoX BUFFER_FILE
  • Direct capture from Blackmagic device: decklink_ingest CARD_NUMBER BUFFER_FILE. CARD_NUMBER determines which card to use: 0 for the first card, 1 for the second, and so on.
  • Capture from any other source: SOME_MJPEG_SOURCE | mjpeg_ingest BUFFER_FILE. Specify the -e, -o, and/or -p options to mjpeg_ingest as necessary (see the interlacing section below).
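
For instance, a minimal end-to-end session for the first DeckLink card might look like this (replay.buf and the 1G size are just example choices):

    dd if=/dev/zero bs=1 count=1 seek=1G of=replay.buf
    decklink_ingest 0 replay.buf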

Basic Idea

We want to get an M-JPEG stream from the video source somehow. This stream is piped into the mjpeg_ingest program, which writes it into a circular buffer. So two things must be done to set up capture: first, create the circular buffer; second, generate the M-JPEG stream and pipe it into mjpeg_ingest. mjpeg_ingest is a great source of flexibility in openreplay, allowing it to capture from almost any video source. But generating the necessary input can be confusing, so this page contains examples for some common video sources.

Buffer Setup

Use the 'dd' command: dd if=/dev/zero of=<your_buffer_filename> bs=1 count=1 seek=<buffer_size>. Replace <your_buffer_filename> with the filename you want to use, and <buffer_size> with the size of the buffer you want to create. (Suffixes such as M and G are allowed; 1G is a good starting point.)
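
A handy property of this dd invocation is that, on most filesystems, it creates a sparse file: the buffer consumes almost no disk space until capture actually writes into it. For example (replay.buf is a placeholder name):

    dd if=/dev/zero of=replay.buf bs=1 count=1 seek=1G
    ls -lh replay.buf   # apparent size: about 1G
    du -h replay.buf    # actual disk usage: tiny until capture begins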

Streaming Input

ffmpeg is a useful utility here, so what follows is a brief tutorial. If you've built openreplay successfully, you should already have it installed. ffmpeg converts video files and streams on the fly, and it works like so: ffmpeg <input options> -i <input file> -f <output format> <output options> <output file>. Like most UNIX programs, it can read from and write to pipes, so we'll pipe our captured video into ffmpeg and use it to convert the video to M-JPEG.

The command to do that in the general case looks a bit like: ffmpeg -i - -f mjpeg -s 720x480 -qscale <X> -. The dashes mean ffmpeg reads from standard input and writes to standard output, i.e. it runs in a pipe. -s 720x480 tells ffmpeg to scale the video to 720x480; that size is hard-coded into most of openreplay (at least for now), so we use it to avoid scaling operations later down the processing pipeline. -qscale <X> sets the encoding quality level: lower values of X mean higher-quality video, and thus higher disk-bandwidth, storage, and memory requirements; higher values compress the video more.
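
As a concrete instance of the general form, assuming some command SOME_VIDEO_SOURCE that writes video to standard output and a buffer file named replay.buf (both placeholders):

    SOME_VIDEO_SOURCE | ffmpeg -i - -f mjpeg -s 720x480 -qscale 4 - | ./mjpeg_ingest replay.buf

Here qscale 4 is just a reasonable middle-of-the-road starting point; raise or lower it to trade quality against disk bandwidth.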

Capture via DeckLink video capture cards

First, let's capture some video from a capture card using openreplay's decklink_capture program. Assuming you're in the openreplay/core directory, do ./decklink_capture 0 | ffmpeg -f rawvideo -pix_fmt uyvy422 -s 720x486 -i - -f mjpeg -qscale 4 - > test.mjpg. This seems rather complicated, but all we're doing is telling ffmpeg to convert incoming raw UYVY422 component video at 720x486 into outgoing M-JPEG. Instead of running this into a buffer via mjpeg_ingest, we're just piping it to a file to see if it worked. If you see a counter counting up, it's working! If you see !!! NO SIGNAL !!! scrolling by instead, reconfigure the Blackmagic capture card via the provided "Blackmagic Control Panel" utility, or check your input connections. Important note: don't scale down video with ffmpeg if you plan on deinterlacing.

Check that it worked: mplayer test.mjpg should play back some of the recorded video. If it does, great! If not, check the settings. The 720x486 frame size, for one, could prove problematic if you're trying to capture something other than NTSC.

If it worked, just change > test.mjpg to | ./mjpeg_ingest <your_buffer_file> to get rolling, as shown below. If all systems are go, you should see FFmpeg's frame counter start ticking off frames. If you see NO SIGNAL messages, troubleshoot as before. If it just freezes up, try deleting the buffer file, recreating it, and starting over; sometimes the buffer gets deadlocked. (If it does deadlock, that could be a sign of a bug, so please report it.)
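
Putting the whole DeckLink pipeline together (replay.buf standing in for your buffer file):

    ./decklink_capture 0 | ffmpeg -f rawvideo -pix_fmt uyvy422 -s 720x486 -i - -f mjpeg -qscale 4 - | ./mjpeg_ingest replay.buf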

Capture via FireWire device

Here, we use the dvgrab command to acquire the video stream from a DV FireWire camera. It can be done like this: dvgrab - | ffmpeg -i - -f mjpeg -s 720x480 -qscale 4 - | ./mjpeg_ingest <whatever_buffer_file>. Note that the -f rawvideo -pix_fmt uyvy422 input options can be dropped, since FFmpeg automatically recognizes DV input data.

Remote Capture

This relies on a neat trick of the ssh command: output from remote commands can be piped through it easily. Setting up ssh properly is beyond the scope of this document, but on an Ubuntu LiveCD it's enough to do sudo apt-get install openssh-server and sudo passwd root for a quick-and-dirty setup. Use a strong password to avoid security risks.
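
That quick-and-dirty setup, spelled out (run these on the remote capture machine, and pick a strong password when prompted):

    sudo apt-get install openssh-server
    sudo passwd root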

Something like the following suffices to capture video from a FireWire camera on a remote laptop: ssh username@hostname_of_remote_laptop 'dvgrab - | ffmpeg -i - -f mjpeg -s 720x480 -qscale 4 -' | ./mjpeg_ingest <whatever_buffer_file>. Note that dvgrab and ffmpeg run on the remote machine here, while mjpeg_ingest runs locally. You could rearrange this if you didn't want to install ffmpeg on the remote machine (as shown below), but doing so can increase the demand on your network. FFmpeg also reports the video data rate next to that frame counter, which helps gauge the load.
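
For instance, the rearranged version with ffmpeg running locally might look like this (hostname and buffer file are placeholders; note that raw DV, rather than M-JPEG, now crosses the network):

    ssh username@hostname_of_remote_laptop 'dvgrab -' | ffmpeg -i - -f mjpeg -s 720x480 -qscale 4 - | ./mjpeg_ingest <whatever_buffer_file>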

mjpeg_ingest and Interlacing

Openreplay will attempt to play some games to smooth out slow motion. In order for this to work properly, it must know about the "field dominance" of your video.

Most cameras capture interlaced video, meaning that they capture an image with half the vertical resolution at 50 or 60 fields per second, instead of capturing the entire image at 25 or 30 frames per second. That means we can trade off some vertical resolution to get smooth slow-motion instant replays. This works by a method called "scan doubling": converting one frame into two by splitting it into its constituent fields, then attempting to re-create the missing lines. For this process to work properly, the program needs to know whether the even- or odd-numbered lines are meant to be displayed first. Otherwise, every other frame will appear out of order.

mjpeg_ingest can consume interlaced input in two different ways: as discrete fields or as complete interlaced frames. If you are providing it discrete fields (720x240 JPEG images in the case of NTSC), run mjpeg_ingest with the '-e' or '-o' option to tell it whether the even or the odd field comes in first. If you're providing it with interlaced frames (i.e. 720x480 JPEG images for NTSC), run it with no options, or specify '-p' if the default field dominance is incorrect. (Incorrect field dominance is obvious if you frame-step through some video: in one step everything moves forward, while in the next it all moves slightly backward.)
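
For example, assuming SOME_FIELD_SOURCE emits discrete-field JPEGs and SOME_FRAME_SOURCE emits full interlaced frames (all names here are placeholders):

    # discrete fields, even field first
    SOME_FIELD_SOURCE | ./mjpeg_ingest -e replay.buf
    # discrete fields, odd field first
    SOME_FIELD_SOURCE | ./mjpeg_ingest -o replay.buf
    # interlaced frames, default field dominance
    SOME_FRAME_SOURCE | ./mjpeg_ingest replay.buf
    # interlaced frames, opposite field dominance
    SOME_FRAME_SOURCE | ./mjpeg_ingest -p replay.buf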