rpicam-vid now waits for synchronisation before starting the timer
Also some other minor corrections.
davidplowman committed Feb 18, 2025
1 parent 8c05c55 commit 9088335
Showing 5 changed files with 15 additions and 15 deletions.
@@ -12,7 +12,7 @@ NOTE: `libcamera` does not yet provide stereoscopic camera support. When running

==== Software Camera Synchronisation

- Raspberry Pi's _libcamera_ implementation has the ability to synchronise the frames of different cameras using only software. This will cause one camera to adjust it's frame timing so as to coincide as closely as possible with the frames of another camera. No soldering or hardware connections are required, and it will work with all Raspberry Pi's camera modules, and even third party ones so long as their drivers implement frame duration control correctly.
+ Raspberry Pi's _libcamera_ implementation has the ability to synchronise the frames of different cameras using only software. This will cause one camera to adjust it's frame timing so as to coincide as closely as possible with the frames of another camera. No soldering or hardware connections are required, and it will work with all of Raspberry Pi's camera modules, and even third party ones so long as their drivers implement frame duration control correctly.

**How it works**

@@ -28,7 +28,7 @@ When cameras are on different devices, the system clocks should be synchronised

**The Server**

- The server, as previously explained, broadcasts timing messages onto the network, by default every second. The server will run for a fixed number of frames, by default 100, after which it will inform the camera application on the device that the "sychronisation point" has been reached. At this moment, the application will start using the frames, so in the case of `rpicam-vid`, they will start being encoded and recorded. Recall that the behaviour and even existence of clients has no bearing on this.
+ The server, as previously explained, broadcasts timing messages onto the network, by default every second. The server will run for a fixed number of frames, by default 100, after which it will inform the camera application on the device that the "synchronisation point" has been reached. At this moment, the application will start using the frames, so in the case of `rpicam-vid`, they will start being encoded and recorded. Recall that the behaviour and even existence of clients has no bearing on this.

If required, there can be several servers on the same network so long as they are broadcasting timing messages to different network addresses. Clients, of course, will have to be configured to listen for the correct address.

@@ -38,31 +38,31 @@ Clients listen out for server timing messages and, when they receive one, will s

The clients learn the correct "synchronisation point" from the server's messages, and just like the server, will signal the camera application at the same moment that it should start using the frames. So in the case of `rpicam-vid`, this is once again the moment at which frames will start being recorded.

- Normally it makes sense to start clients _before_ the server, as the clients will simply wait (the "syncrhonisation point" has not been reached) until a server is seen broadcasting onto the network. This obviously avoids timing problems where a server might reach its "synchronisation point" even before all the clients have been started!
+ Normally it makes sense to start clients _before_ the server, as the clients will simply wait (the "synchronisation point" has not been reached) until a server is seen broadcasting onto the network. This obviously avoids timing problems where a server might reach its "synchronisation point" even before all the clients have been started!

**Usage in `rpicam-vid`**

- We can use software camera synchronisation with `rpicam-vid` to record videos that are sychronised frame-by-frame. We're going to assume we have two cameras attached, and we're going to use camera 0 as the server, and camera 1 as the client. `rpicam-vid` defaults to a fixed 30 frames per second, which will be fine for us.
+ We can use software camera synchronisation with `rpicam-vid` to record videos that are synchronised frame-by-frame. We're going to assume we have two cameras attached, and we're going to use camera 0 as the server, and camera 1 as the client. `rpicam-vid` defaults to a fixed 30 frames per second, which will be fine for us.

First we should start the client:
[source,console]
----
$ rpicam-vid -n -t 20s --camera 1 --codec libav -o client.mp4 --sync client
----

- Note the `--sync client` parameter. This will record for 20 seconds in total but note that this _includes_ the time to start the server and achieve synchronisation. So while the start of the recordings, and all the frames, will be synchronised, the end of the recordings is not.
+ Note the `--sync client` parameter. This will record for 20 seconds but _only_ once the synchronisation point has been reached. If necessary, it will wait indefinitely for the first server message.

To start the server:
[source,console]
----
$ rpicam-vid -n -t 20s --camera 0 --codec libav -o server.mp4 --sync server
----

- This will run for 20 seconds but with the default settings (100 frames at 30fps) will give clients just over 3 seconds to get synchronised before anything is recorded. So the final video file will contain slightly under 17 seconds of video.
+ This too will run for 20 seconds counting from when the synchronisation point is reached and the recording starts. With the default synchronisation settings (100 frames at 30fps) this means there will be just over 3 seconds for clients to get synchronised.
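
As a quick check on that arithmetic: 100 frames at 30fps is 100 / 30 ≈ 3.3 seconds of settling time before the 20 second recording begins. If a different fixed frame rate suits your cameras better, pass the same rate explicitly to both client and server. A sketch, assuming 25fps is achievable in the camera mode being used:

[source,console]
----
$ rpicam-vid -n -t 20s --camera 0 --framerate 25 --codec libav -o server.mp4 --sync server
----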

The server's broadcast address and port, the frequency of the timing messages and the number of frames to wait for clients to synchronise, can all be changed in the camera tuning file. Clients only pay attention to the broadcast address here which should match the server's; the other information will be ignored. Please refer to the https://datasheets.raspberrypi.com/camera/raspberry-pi-camera-guide.pdf[Raspberry Pi Camera tuning guide] for more information.
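
The tuning file format itself is beyond the scope of this section, but as a sketch of the workflow (assuming a Camera Module 3, the IMX708, on a Raspberry Pi 5; substitute your own sensor's file, and the `vc4` directory on earlier models), you could copy the shipped tuning file, edit the synchronisation parameters described in the tuning guide, and point `rpicam-vid` at your copy with the `--tuning-file` option:

[source,console]
----
$ cp /usr/share/libcamera/ipa/rpi/pisp/imx708.json my_tuning.json
$ rpicam-vid -n -t 20s --camera 0 --codec libav -o server.mp4 --sync server --tuning-file my_tuning.json
----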

In practical operation there are a few final points to be aware of:

* The fixed framerate needs to be below the maximum framerate at which the camera can operate (in the camera mode that is being used). This is because the synchronisation algorithm may need to _shorten_ camera frames so that clients can catch up with the server, and this will fail if it is already running as fast as it can.
- * Whilst cameras frames should be correctly synchronised, at higher framerates, or depending on system load, it is possible for frames, either on the clients or server, to be dropped. In these cases the frame timestamps will help an application to work out what has happened, though it's usually easier simply to try and avoid frame drops - perhaps by lowering the framerate, increasing the number of buffers being allocated to the camera queues, or reducing system load (see the xref:camera_software.adoc#buffer-count[`--buffer-count` option].)
+ * Whilst camera frames should be correctly synchronised, at higher framerates or depending on system load, it is possible for frames, either on the clients or server, to be dropped. In these cases the frame timestamps will help an application to work out what has happened, though it's usually simpler to try and avoid frame drops - perhaps by lowering the framerate, increasing the number of buffers being allocated to the camera queues (see the xref:camera_software.adoc#buffer-count[`--buffer-count` option]), or reducing system load.
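
Putting those last points together, a client invocation that trades away a little frame rate and allocates extra buffers might look like the following sketch; the right numbers depend on your camera mode and system load:

[source,console]
----
$ rpicam-vid -n -t 20s --camera 1 --framerate 24 --buffer-count 12 --codec libav -o client.mp4 --sync client
----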
@@ -38,7 +38,7 @@ Raspberry Pi OS recognises the following overlays in `/boot/firmware/config.txt`

To use one of these overlays, you must disable automatic camera detection. To disable automatic detection, set `camera_auto_detect=0` in `/boot/firmware/config.txt`. If `config.txt` already contains a line assigning a `camera_auto_detect` value, change the value to `0`. Reboot your Raspberry Pi with `sudo reboot` to load your changes.

- If your Raspberry Pi has two camera connectors (Raspberry Pi 5 or CM4, for example), then you can specify which one you are referring to by adding `,cam0` or `,cam1` (don't add any spaces) to the `dtoverlay` that you used from the table above. If you do not add either of these, it will default to checking camera connector 1 (`cam1`). But note that for official Raspberry PI camera modules, auto-detection will correctly identify all the cameras connected to your device.
+ If your Raspberry Pi has two camera connectors (Raspberry Pi 5 or one of the Compute Modules, for example), then you can specify which one you are referring to by adding `,cam0` or `,cam1` (don't add any spaces) to the `dtoverlay` that you used from the table above. If you do not add either of these, it will default to checking camera connector 1 (`cam1`). But note that for official Raspberry Pi camera modules, auto-detection will correctly identify all the cameras connected to your device.
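
For example, to disable auto-detection and load the overlay for a Camera Module 3 (IMX708) on camera connector 0, substituting the overlay name for your own sensor, `/boot/firmware/config.txt` would contain:

[source,ini]
----
camera_auto_detect=0
dtoverlay=imx708,cam0
----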

[[tuning-files]]
==== Tweak camera behaviour with tuning files
@@ -556,7 +556,7 @@ Post-processing is a large topic and admits the use of third-party software like
==== `buffer-count`
- The number of buffers to allocate for still image capture or for video recording. The default value of zero lets each application choose a value for itself (1 for still image capture, and 6 for video recording). Increasing the number can sometimes help to reduce the number of frame drops, particularly at higher framerates.
+ The number of buffers to allocate for still image capture or for video recording. The default value of zero lets each application choose a reasonable number for its own use case (1 for still image capture, and 6 for video recording). Increasing the number can sometimes help to reduce the number of frame drops, particularly at higher framerates.
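
For instance, to allocate a dozen buffers for a high-framerate recording (a sketch; tune the number, resolution and frame rate to your own use case):

[source,console]
----
$ rpicam-vid -t 10s --width 1280 --height 720 --framerate 60 --buffer-count 12 -o test.h264
----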
==== `viewfinder-buffer-count`
2 changes: 1 addition & 1 deletion documentation/asciidoc/computers/camera/rpicam_vid.adoc
@@ -20,7 +20,7 @@ $ ffplay test.h264

[WARNING]
====
- Older versions of vlc used to play H.264 files correctly, but recent versions do not - displaying only a few, or possibly garbled, frames. You should either use a different media player, or save your files in a more widely supported container format - such as MP4 (see below).
+ Older versions of vlc were able to play H.264 files correctly, but recent versions do not - displaying only a few, or possibly garbled, frames. You should either use a different media player, or save your files in a more widely supported container format - such as MP4 (see below).
====

On Raspberry Pi 5, you can output to the MP4 container format directly by specifying the `mp4` file extension for your output file:
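
A minimal sketch (the 10 second duration and `test.mp4` filename are arbitrary):

[source,console]
----
$ rpicam-vid -t 10s -o test.mp4
----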
10 changes: 5 additions & 5 deletions documentation/asciidoc/computers/camera/streaming.adoc
@@ -17,9 +17,9 @@ To view video streamed over UDP using a Raspberry Pi as a client, use the follow
----
$ ffplay udp://@:<port> -fflags nobuffer -flags low_delay -framedrop
----
- As noted previously, `vlc` no longer handles unencapsulated h264 streams.
+ As noted previously, `vlc` no longer handles unencapsulated H.264 streams.

- In fact, support for unencapsulated h264 can generally be quite poor so it is often better to send an MPEG-2 Transport Stream instead. Making use of `libav`, this can be accomplished with:
+ In fact, support for unencapsulated H.264 can generally be quite poor so it is often better to send an MPEG-2 Transport Stream instead. Making use of `libav`, this can be accomplished with:

[source,console]
----
@@ -35,7 +35,7 @@ $ vlc udp://@:<port>

=== TCP

- You can also stream video over TCP. As before, we can send an unencapsulated h264 stream over the network. To use a Raspberry Pi as a server:
+ You can also stream video over TCP. As before, we can send an unencapsulated H.264 stream over the network. To use a Raspberry Pi as a server:

[source,console]
----
@@ -65,7 +65,7 @@ $ vlc tcp://<ip-addr-of-server>:<port>

=== RTSP

- We can use VLC as an RTSP server, however, we must send it an MPEG-2 Transport Stream as it no longer understands unencapsulated h264:
+ We can use VLC as an RTSP server, however, we must send it an MPEG-2 Transport Stream as it no longer understands unencapsulated H.264:

[source,console]
----
@@ -110,7 +110,7 @@ $ rpicam-vid -t 0 --codec libav --libav-format mpegts --libav-audio -o "udp://<

https://gstreamer.freedesktop.org/[GStreamer] is a Linux framework for reading, processing and playing multimedia files. We can also use it in conjunction with `rpicam-vid` for network streaming.

- This setup uses `rpicam-vid` to output an encoded h.264 bitstream to stdout. As we've done previously, we're going to encapsulate this in an MPEG-2 Transport Stream for better downstream compatibility.
+ This setup uses `rpicam-vid` to output an H.264 bitstream to stdout, though as we've done previously, we're going to encapsulate it in an MPEG-2 Transport Stream for better downstream compatibility.

Then, we use the GStreamer `fdsrc` element to receive the bitstream, and extra GStreamer elements to send it over the network. On the server, run the following command to start the stream, replacing the `<ip-addr>` placeholder with the IP address of the client or multicast address and replacing the `<port>` placeholder with the port you would like to use for streaming:
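
A sketch of the command this describes, assuming the MPEG-2 Transport Stream encapsulation used above; `fdsrc fd=0` reads the bitstream from stdin and `udpsink` forwards it to the network:

[source,console]
----
$ rpicam-vid -t 0 -n --codec libav --libav-format mpegts -o - | gst-launch-1.0 fdsrc fd=0 ! udpsink host=<ip-addr> port=<port>
----

On the client, the stream can then be played with `ffplay udp://@:<port>` or `vlc udp://@:<port>` as shown earlier.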

