
Image processing


Handling images from our cameras

This page describes how each Pi Gazing observatory analyses the video it receives. You do not need to understand these details in order to use the software, but they are likely to be of interest if you plan to start tweaking the software.

Analogue CCTV cameras such as the Watec 902H2 Ultimate that we use produce a stream of video frames at a rate of 25 per second. This is presented to the Raspberry Pi as a webcam-like device by a USB dongle -- the Easycap digitiser.

The Pi Gazing software needs to analyse this video stream in real time, and on a Raspberry Pi this means that processing time is extremely tight. Whereas most of the Pi Gazing control software is written in Python, the video analysis is written in optimised C code.

Output data

Each Pi Gazing camera produces two streams of observations. It takes a long-exposure still image once every 30 seconds. It is also motion sensitive, and records a video clip whenever a moving object is seen.

The motion sensor needs to be extremely selective about what it triggers on. Throughout the night, stars twinkle, clouds move, and other patterns of light recur many times. If the camera triggered on all of these events, it would be very hard to pick out the good events from the junk.

Such distracting sources commonly include:

  • Stars twinkling. Stars, especially near the horizon, tend to brighten and fade due to heat haze in the Earth's atmosphere.

  • House lights. When a neighbour visits the bathroom at 3am, their bathroom light may well illuminate houses and trees in the field of view.

  • Car headlights. Like house lights, these reflect off buildings and trees, and appear to move.

  • Thermal noise. There can be a lot of fuzz in some of the images, and it's important to separate out real changes in brightness from random noise.

  • Video artifacts. Sometimes the Easycap video digitiser dongle doesn't work quite perfectly. For example, it may put a bright grey streak across an image, like a poorly tuned analogue TV.

How Pi Gazing works

The video from the camera is read into a rolling buffer, around 30 seconds long.

Once every 30 seconds, we make an estimate of the amplitude of the noise level in the input video stream. This is done by studying every 499th pixel from top to bottom of the image, in the first 16 frames in the rolling buffer. Assuming nothing is moving, the pixel values ought to be the same in all 16 frames, and any variation is purely due to noise. In practice, something might move (e.g. a tree blowing in the wind), but we sample enough pixels from all across the image that this should average out.

To get an estimate of the noise level, the standard deviation of each sampled pixel's value across those 16 frames is calculated. These standard deviations are then averaged.
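
As a rough illustration of this noise estimate, here is a minimal Python/numpy sketch; the real implementation is optimised C, and the function name and array layout here are assumptions:

```python
import numpy as np

def estimate_noise_level(frames, stride=499):
    """Estimate the noise amplitude from the first 16 frames of the buffer.

    frames -- array of shape (nframes, height, width), greyscale pixel values
    stride -- sample every Nth pixel of the flattened image (499 in the text)
    """
    sample = frames[:16].reshape(16, -1)[:, ::stride].astype(np.float64)
    # Standard deviation of each sampled pixel across the 16 frames...
    per_pixel_sd = sample.std(axis=0)
    # ...then averaged to give a single noise figure for the whole image.
    return per_pixel_sd.mean()
```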

We then begin looking for moving objects. The frames are divided into consecutive groups of 1-3 frames (set by the variable TRIGGER_FRAMEGROUP). These are stacked (i.e. averaged) together to reduce the noise. The stacked image is then compared against the stack produced a few iterations previously (set by the variable STACK_COMPARISON_INTERVAL). A search is then made for pixels which satisfy the following criteria (a sketch of this test follows the list):

  • The pixel must have brightened by several standard deviations between the two stacks.

  • The pixel must be part of a reasonably compact bright splodge. It must be significantly brighter than most other pixels around it.
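
A hedged sketch of this per-pixel trigger test, again in Python/numpy rather than the real optimised C; the 3-sigma threshold, the 15-pixel neighbourhood and the function names are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

TRIGGER_FRAMEGROUP = 3     # frames averaged into each stack (1-3 in the real code)
SIGMA_THRESHOLD = 3.0      # "several standard deviations" -- illustrative value

def trigger_map(new_frames, old_stack, noise_level):
    """Return a boolean map of pixels satisfying the trigger criteria.

    new_frames  -- the latest TRIGGER_FRAMEGROUP frames, shape (n, h, w)
    old_stack   -- the stack from STACK_COMPARISON_INTERVAL iterations ago
    noise_level -- noise amplitude estimated as described above
    """
    new_stack = new_frames.astype(np.float64).mean(axis=0)
    brightening = new_stack - old_stack
    # Criterion 1: brightened by several standard deviations between the stacks.
    significant = brightening > SIGMA_THRESHOLD * noise_level
    # Criterion 2 (illustrative): significantly brighter than its local
    # surroundings, i.e. part of a compact bright blob, not a diffuse glow.
    local_mean = uniform_filter(new_stack, size=15)
    compact = new_stack > local_mean + SIGMA_THRESHOLD * noise_level
    return significant & compact
```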

A map is made of such pixels, and each pixel keeps a counter of how many times it has met the criteria above. A search is made for groups of more than 10 connected pixels which all satisfy the above, but pixels are excluded from the counting if they have previously triggered more than twice as often as the average pixel. This excludes trees, houses, etc., which don't move but change brightness frequently.

Groups are also excluded if they span fewer than three lines of the image, so that line artifacts are ignored.

Once a group of 10 or more connected pixels is found, its centroid is calculated, and it is registered as a bright object detection.
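
A minimal sketch of this grouping step, using scipy's connected-component labelling in place of the hand-written C; the function and variable names are assumptions:

```python
import numpy as np
from scipy.ndimage import label

MIN_GROUP_PIXELS = 10   # groups need roughly 10 or more connected pixels
MIN_LINE_SPAN = 3       # groups must span at least three image lines

def find_detections(trigger, trigger_counts):
    """Find bright-object detections in a boolean trigger map.

    trigger        -- boolean map from the per-pixel test above
    trigger_counts -- how many times each pixel has triggered so far
    """
    # Exclude pixels that have triggered more than twice as often as average
    # (trees, houses, etc. that flicker in brightness but never move).
    usable = trigger & (trigger_counts <= 2 * trigger_counts.mean())

    labels, n_groups = label(usable)        # connected-component labelling
    detections = []
    for i in range(1, n_groups + 1):
        ys, xs = np.nonzero(labels == i)
        if len(ys) < MIN_GROUP_PIXELS:
            continue
        if ys.max() - ys.min() + 1 < MIN_LINE_SPAN:
            continue                        # reject single-line video artifacts
        detections.append((xs.mean(), ys.mean()))   # centroid of the group
    return detections
```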

A registry is kept of the bright objects which are currently being tracked. If this new detection is within 100 pixels of an object currently being tracked, it is assumed to be the same object, and is added to a catalogue of sightings of that object.

Otherwise, the detection is taken to be of a new object.

Objects remain in the registry until either (a) they have been there for 30 seconds (we won't be able to make a video of the object unless we dump it now, as the rolling buffer is about to overwrite itself), or (b) they have not been sighted for 2 seconds.

When an object is purged from the registry, it is discarded if it was only seen in a single frame. Otherwise, a video is generated and other metadata stored.
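
A sketch of the registry logic under the same caveats; the dataclass layout and field names are assumptions, not the real C structures:

```python
from dataclasses import dataclass, field

MATCH_RADIUS = 100.0    # pixels: detections closer than this join an existing object
MAX_AGE = 30.0          # seconds before the rolling buffer forces the object out
MAX_GAP = 2.0           # seconds without a sighting before an object is purged

@dataclass
class TrackedObject:
    sightings: list = field(default_factory=list)   # list of (time, x, y) tuples

def update_registry(registry, detections, now):
    """Match new detections against tracked objects, then purge stale ones."""
    for (x, y) in detections:
        for obj in registry:
            _, ox, oy = obj.sightings[-1]
            if (x - ox) ** 2 + (y - oy) ** 2 < MATCH_RADIUS ** 2:
                obj.sightings.append((now, x, y))    # same object, new sighting
                break
        else:
            # No tracked object nearby: register a new object.
            registry.append(TrackedObject(sightings=[(now, x, y)]))

    for obj in list(registry):
        first_seen = obj.sightings[0][0]
        last_seen = obj.sightings[-1][0]
        if now - first_seen > MAX_AGE or now - last_seen > MAX_GAP:
            registry.remove(obj)
            # An object seen in only a single frame would be discarded here;
            # otherwise the real code writes out a video clip and metadata.
    return registry
```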

Still frames

In addition to meteor hunting, the cameras also record still frames throughout the night. Presently these are taken once every 30 seconds, for 29 seconds.

This is set up so that there is almost always a time lapse exposure in progress -- thus if users want to make star trail diagrams by stacking the images together, they'll get continuous lines marking the path of each star, without gaps between the exposures.
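
One common way to stack the exposures into a star trail image is to take the per-pixel maximum over the whole set, so each star leaves a continuous bright line. A minimal Python/numpy sketch, not part of the Pi Gazing code itself:

```python
import numpy as np

def star_trails(images):
    """Combine a night's time-lapse exposures into a star-trail image."""
    trail = None
    for img in images:
        # Keep the brightest value seen so far at each pixel.
        trail = img.copy() if trail is None else np.maximum(trail, img)
    return trail
```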

These still exposures of semantic type pigazing:timelapse/lensCorr are made simply by averaging video frames over the duration of the exposure. Barrel correction is applied to the image.
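
The averaging step itself is straightforward; a minimal sketch, assuming the frames arrive as numpy arrays and leaving the barrel correction aside:

```python
import numpy as np

def timelapse_exposure(frames):
    """Average video frames over the exposure to make a single still image."""
    accumulator = np.zeros(frames[0].shape, dtype=np.float64)
    for frame in frames:
        accumulator += frame              # sum in floating point to avoid overflow
    return accumulator / len(frames)      # mean brightness of each pixel
```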

Still exposures of type pigazing:timelapse/bgrdSub/lensCorr are made by subtracting an estimate of the sky background from each pixel, and then applying a gain of 5x to the image. The sky background is estimated to be the modal (i.e. most common) brightness of each pixel over the preceding 20 minutes. The mode is used because subtracting it removes hot pixels, while remaining insensitive to the fact that some pixels have had stars pass through them for a couple of minutes. It does remove Polaris, though, since Polaris barely moves and so ends up in the background estimate.
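
A hedged sketch of how such a background-subtracted frame could be built, assuming 8-bit greyscale frames; the per-pixel loop is written for clarity rather than speed, and the function names are assumptions:

```python
import numpy as np

GAIN = 5.0   # gain applied after the background has been subtracted

def modal_background(history):
    """Most common brightness of each pixel over the preceding ~20 minutes.

    history -- array of shape (nframes, height, width), dtype uint8
    """
    height, width = history.shape[1:]
    background = np.zeros((height, width), dtype=np.uint8)
    for y in range(height):
        for x in range(width):
            # The mode of this pixel's brightness history.
            background[y, x] = np.bincount(history[:, y, x]).argmax()
    return background

def background_subtracted(image, background):
    """Subtract the sky background estimate and apply the 5x gain."""
    out = (image.astype(np.float64) - background) * GAIN
    return np.clip(out, 0, 255).astype(np.uint8)
```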