
Triggers



Camera Triggering

For general image viewing, it is not important that all the cameras at a site take images at exactly the same time. However, for quantitative measurements that cross camera image boundaries, it is important that images be collected simultaneously. For many intra-camera data collections (cBathy stacks are a prime example) the image timing is also critical. Nothing is ever perfect, so an understanding of the timing errors involved in triggering cameras is also necessary.

Different camera manufacturers provide different capabilities in this regard, and it is important to review the specifications to know if the cameras will be usable for Argus collections. There are three main methods of prodding a camera to collect an image, of which two are useful.

Software triggering

This method requires the host computer to send a signal to the camera instructing it to collect an image. The downside to this method is that if the host is busy and cannot send the trigger at the right time, images can be lost and time-series data damaged. The upside is that, with proper programming, it is possible to synchronize images with real-world time if necessary, i.e., collect a frame exactly on the second.

Internal triggering

Most cameras can be configured to provide frames on a regular basis, using an internal timing system. For example, FireWire cameras that follow the IIDC standard have frame rates that double starting at 1.875 fps: 3.75 fps, 7.5 fps, 15 fps, etc. Extensions to the IIDC standard also allow setting arbitrary frame rates, such as 2 fps (a standard Argus collection rate). Internal triggering is usually very accurate since it is based on the accurate crystal oscillators needed to control the image sensor and other data flow.
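As a concrete illustration, here is a minimal sketch assuming libdc1394 (the FireWire library discussed further down this page): the first call selects one of the standard doubling rates, and the second uses the absolute-value feature extension to request an arbitrary rate such as 2 fps, on cameras that support it. Camera setup and error handling are omitted.

```cpp
#include <dc1394/dc1394.h>

void set_internal_rates(dc1394camera_t *camera)
{
    // Standard IIDC rate: one of the doubling series from 1.875 fps.
    dc1394_video_set_framerate(camera, DC1394_FRAMERATE_7_5);

    // Arbitrary rate via the absolute-value extension, where supported;
    // 2 fps is the standard Argus collection rate mentioned above.
    dc1394_feature_set_absolute_control(camera, DC1394_FEATURE_FRAME_RATE,
                                        DC1394_ON);
    dc1394_feature_set_absolute_value(camera, DC1394_FEATURE_FRAME_RATE, 2.0f);
}
```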

However, internal triggering does not necessarily synchronize inter-camera collection. Point Grey FireWire cameras do synchronize their collections to the FireWire bus clock, and are thus synchronized with each other. (Transitive property.)

If you are using one camera, internal triggering is the simplest method of controlling image collection.

External triggering

Many scientific and industrial cameras provide a method of externally triggering the camera: an input signal of the appropriate form causes the camera to collect an image. Point Grey cameras have a GPIO connector on the rear (in many different styles depending on the specific model) with at least one pin programmable as an external trigger input. These cameras typically use a 5 V pulse, active low, as a trigger signal, but it is the transition that is detected, so the actual duty cycle of the pulse signal is not important.

Generating the trigger signal can be as easy as programming an Arduino or other microcontroller to generate pulses on one or more output pins, or wiring a simple NE555 timer IC as an oscillator to do the same. The great advantage of the microcontroller solution is that it can be programmed to accept commands from the host computer to change the trigger rate, or to send a signal to the host when a trigger pulse is sent. The latter lets the host know what time the trigger was sent; for some bus topologies (GigE, for example) there is no inherent time information in the image data, and only the arrival time of the incoming image can be determined.
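A minimal sketch of such a trigger generator as an Arduino program is shown below. The pin number and pulse width are arbitrary choices, not Argus specifications; the line idles high and pulses low to match the active-low trigger input described above, and only the falling edge matters to the camera.

```cpp
const int TRIGGER_PIN = 8;             // any digital output pin will do
const unsigned long PERIOD_MS = 500;   // 2 Hz collection rate
const unsigned long PULSE_MS  = 10;    // width is not critical

void setup() {
    pinMode(TRIGGER_PIN, OUTPUT);
    digitalWrite(TRIGGER_PIN, HIGH);   // idle high (inactive)
}

void loop() {
    digitalWrite(TRIGGER_PIN, LOW);    // falling edge triggers the frame
    delay(PULSE_MS);
    digitalWrite(TRIGGER_PIN, HIGH);
    delay(PERIOD_MS - PULSE_MS);
}
```

A delay()-based loop like this drifts slightly over long collections; a timer-interrupt version is the usual refinement, and in either case the microcontroller's crystal sets the ultimate accuracy, just as with internal triggering.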

Point Grey cameras also have a programmable GPIO output signal that can be set to pulse whenever the camera has taken a frame. This would allow using one camera in internal trigger mode to trigger the other cameras at a site. However, it creates a situation where the failure of one camera (the trigger source) stops collection from all cameras. For that reason, the CIL Argus stations have never used this method, relying instead on external triggering of all cameras from the same microcontroller, or on the self-synchronization of FireWire cameras using internal triggering mode.

Image Timing

It is impossible to talk about triggering cameras to produce images without also mentioning the issue of frame timing. Just as important as when you trigger the images is when you intended to trigger them and when they actually show up on the host computer.

If you are running one camera using internal triggering, it is almost certainly sufficient to rely on the host system clock for the collection start time and to use the nominal camera frame rate for inter-frame time measurements. For the most accurate measurements using stack or other time-series collections, it is possible to measure the actual frame rate of a camera in the laboratory and then assume that camera temperature at the site will have little effect. This also holds true for any synchronized multi-camera internal triggering system (like Point Grey's).

Doing this, however, means you will not be able to detect any missing frames, which can happen for any number of reasons. If it is possible to have time information for each image, you can at least detect, and possibly correct in later processing, any such losses. There are several ways of doing this.

Incoming Frame Time

If your software supports this ("hotm" from CIL does), you can record the time the frame arrives at the host collection software. A simple "getFrame; getTime" sequence accomplishes this. This is the least accurate method, since the time will include any software or hardware delays in the system. If you can avoid this method, do so.
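A minimal sketch of the "getFrame; getTime" pattern follows. get_frame() and record() are hypothetical placeholders for whatever your capture library and bookkeeping provide; they are not hotm's API.

```cpp
#include <sys/time.h>
#include <stdio.h>

struct Frame { /* image data from the capture library */ };

static Frame *get_frame() { static Frame f; return &f; }   // placeholder

static void record(Frame *, const timeval &t)              // placeholder
{
    printf("frame arrived at %ld.%06ld\n", (long)t.tv_sec, (long)t.tv_usec);
}

int main()
{
    Frame *frame = get_frame();    // returns when a frame has arrived
    timeval arrival;
    gettimeofday(&arrival, NULL);  // host clock, read after the call returns,
                                   // so it includes transfer/scheduling delays
    record(frame, arrival);
    return 0;
}
```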

Incoming Frame Time 2

Some libraries implement a low-level frame arrival time measurement. libdc1394 (FireWire) does this: each frame retrieved from the camera has an associated time of arrival of the last packet of data from the camera. The advantage of this method is that the measurement occurs at the device driver level and is thus less dependent on program swapping or multi-user scheduling.
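A sketch of reading this driver-level arrival time with libdc1394 is shown below. Each dequeued frame carries a timestamp in microseconds, filled in by the capture layer below the application. Camera setup and error handling are omitted; it assumes capture is already running.

```cpp
#include <dc1394/dc1394.h>
#include <stdio.h>

void poll_frame(dc1394camera_t *camera)
{
    dc1394video_frame_t *frame = NULL;
    dc1394_capture_dequeue(camera, DC1394_CAPTURE_POLICY_WAIT, &frame);
    if (frame) {
        printf("frame arrived at %llu us\n",
               (unsigned long long)frame->timestamp);
        dc1394_capture_enqueue(camera, frame);  // return the buffer
    }
}
```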

Embedded Frame Time

Point Grey cameras, in particular, can embed a good deal of information about each frame within the frame itself. This appears as binary data in the first (up to) 32 bytes of the image. (This first line of the image is so close to the edge as to be useless anyway; in Argus images created by hotm it is overwritten with human-readable information about the image.) Each returned parameter takes 4 bytes, and by default all 8 possible return values are enabled so that the frame time appears at the same place in every image. The exposure and gain values are in a camera-dependent binary encoding, but the frame time appears as a bit-encoded 32-bit word (decoded in the sketch after the list below). Each frame time includes:

  • bus second -- a value from 0 to 127
  • bus cycle -- a value from 0 to 7999 (8000 cycles per second)
  • bus offset -- a value from 0 to 3071 (fraction of a 125 µs cycle)
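A minimal sketch of unpacking this word from the first four bytes of an image follows, assuming the Point Grey layout (7 bits of bus seconds, 13 bits of cycle count, 12 bits of cycle offset) and big-endian byte order; consult your camera's documentation to confirm both.

```cpp
#include <stdint.h>

struct BusTime { unsigned seconds, cycles, offset; };

BusTime decode_embedded_time(const uint8_t *pixels)
{
    // Reassemble the 32-bit word from the first four image bytes.
    uint32_t w = ((uint32_t)pixels[0] << 24) | ((uint32_t)pixels[1] << 16) |
                 ((uint32_t)pixels[2] << 8)  |  (uint32_t)pixels[3];
    BusTime t;
    t.seconds = (w >> 25) & 0x7F;    // 0..127
    t.cycles  = (w >> 12) & 0x1FFF;  // 0..7999
    t.offset  =  w        & 0xFFF;   // fraction of a 125 us cycle
    return t;
}
```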

For FireWire cameras, this value is the FireWire bus clock time when the image was initiated. There is a library function that will return the FireWire bus clock and system clock time at the same point, so it is relatively trivial to convert the embedded FireWire bus time to real-world time.
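In libdc1394, dc1394_read_cycle_timer() does exactly this, sampling the bus cycle timer and the host clock (in microseconds) at nearly the same instant; support for the call is platform-dependent. A minimal sketch of establishing the common reference point:

```cpp
#include <dc1394/dc1394.h>
#include <stdio.h>

void print_reference(dc1394camera_t *camera)
{
    uint32_t bus_time = 0;    // packed like the embedded frame time
    uint64_t host_us  = 0;    // host clock in microseconds
    if (dc1394_read_cycle_timer(camera, &bus_time, &host_us) == DC1394_SUCCESS)
        printf("bus 0x%08x at host %llu us\n",
               bus_time, (unsigned long long)host_us);
}
```

With one such pair in hand, an embedded frame time can be differenced against the sampled bus time and the result added to the host time, minding the 128-second wraparound of the bus seconds field.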

For GigE cameras, the embedded frame time is based on the camera's internal clock, and since there is no synchronization of these clocks with the host or other cameras, this time value is useful only to determine the difference in time between subsequent frames. This does, however, allow you to determine when a frame (or two, or three...) has been lost.
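As a small sketch of that lost-frame check, assuming per-frame timestamps in microseconds from any monotonic source (camera clock ticks, converted bus time, etc.) and the nominal inter-frame period:

```cpp
#include <stdint.h>

// Returns how many frames are missing between two consecutive
// timestamps: 0 when the gap is about one period, 1 when one frame
// was dropped, and so on.
int frames_missing(uint64_t prev_us, uint64_t curr_us,
                   uint64_t frame_period_us)
{
    uint64_t gap = curr_us - prev_us;
    // Round the gap to the nearest whole number of periods, minus one.
    return (int)((gap + frame_period_us / 2) / frame_period_us) - 1;
}
```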

Danger

When using FireWire cameras in quantities that require more than one FireWire bus, it is important to remember that the bus time on a FireWire bus is not synchronized to any other FireWire bus. For example, Point Grey sells a PCI FireWire adapter that has two buses. Unfortunately, they did not use a single oscillator (crystal) to run both bus interfaces, so a single card carries two buses that drift in time relative to each other. This means that cameras on different buses will not self-synchronize, and time-series collections across a multiple-camera system may have significant time errors between cameras.

Next topic: Making A Trigger Generator →
