
Installation Design

Shawn Harrison edited this page Mar 25, 2017 · 13 revisions

← Previous topic: FOV and Lenses

The primary goal of an installation design is to select appropriate cameras and lenses to yield adequate pixel resolution over a design world space. Usually the target sampling space has cross-shore and longshore extents and there may be a worst-case required pixel resolution. Since camera resolution degrades with range, designing for a minimum resolution at the furthest range implies that near ranges will be highly over-sampled. Pixel tools are based on the assumption that most analyses do not require retention of all of the sampled pixels, so only data from the required pixels are retained. Pixel tools are described [elsewhere].

The steps in an installation design are 1) define a local coordinate system and the region to be sampled in local coordinates, 2) determine the camera location in that coordinate system, 3) choose a specific type of camera, and 4) choose specific lenses. The process is often iterative, as different options are explored.

A matlab routine ArgusDesignDemo.m is included under Support-Routines, which goes through the steps of an installation design. It is described in further detail below.

1. Local Coordinate System

This is really a personal choice. For Argus installations, we have always defined a local coordinate system in meters with the positive x axis pointing offshore, positive z oriented up, and positive y to the left as you look out to sea (forming a right-hand coordinate system). We usually choose the origin to be some convenient local location, a benchmark, or even the location of one of the cameras. Once chosen, all analyses will be expressed in these units. It is useful to have a routine to convert from local to geographic coordinates. In the CIL we have our own routines called Argus2LatLong etc. It can be useful to define your vertical datum to be compatible with tidal data to allow simpler rectification and tidal corrections.
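Such a conversion is just a rotation plus a translation under a flat-earth approximation. The sketch below is illustrative only; these are not the CIL's Argus2LatLong routines, and the function and parameter names are assumptions. It converts between local coordinates and a projected map system such as UTM, given the map coordinates of the local origin and the rotation of the local x axis relative to the map grid:

```python
import math

def local_to_geo(x, y, origin_east, origin_north, rot_deg):
    """Convert local (x offshore, y alongshore) coordinates to map
    coordinates (e.g. UTM easting/northing). rot_deg is the rotation
    of the local x axis from the map east axis, in degrees."""
    r = math.radians(rot_deg)
    east = origin_east + x * math.cos(r) - y * math.sin(r)
    north = origin_north + x * math.sin(r) + y * math.cos(r)
    return east, north

def geo_to_local(east, north, origin_east, origin_north, rot_deg):
    """Inverse transform: map coordinates back to local x, y."""
    r = math.radians(rot_deg)
    de, dn = east - origin_east, north - origin_north
    x = de * math.cos(r) + dn * math.sin(r)
    y = -de * math.sin(r) + dn * math.cos(r)
    return x, y
```

A round trip through both functions should recover the original local coordinates, which is a useful sanity check on whichever conversion routines you write for your own site.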

Some people choose to use geographic coordinates directly, usually local UTM coordinates. However, the numbers are then of order 10^6, which is inconvenient for labeling axes and for most other purposes. Also, some nonlinear search routines in Argus analyses will lose accuracy if large x-y coordinates are used. You should not use lat-long values directly, since they do not form a rectilinear system.

Having defined the coordinate system, you can then define the sampling region of interest. On a straight beach this would usually be roughly symmetric in the longshore direction about the camera location and would extend from a shoreward limit (perhaps the dunes) to some offshore limit well outside the surf zone.

As an example, the coordinate system at the Field Research Facility at Duck, NC, was defined when the site was established and has the x origin well behind the dune and the y origin at the south property line such that all areas of interest had positive x and y coordinates. The pier at the center of the property is roughly at y = 500 m and a sensible area of interest is y values between 0 and 1000 m and x values from 50 (roughly the dune crest) to 500 m. With improved cameras, we have recently designed a span of 750 m on either side of the pier and an offshore limit of 800 m.

2. Camera Location

The criteria for selecting an appropriate camera location are discussed in [Fixed Mounting Platforms](Mounting Platforms). For site design and the creation of a resolution map for planning, you will mainly need to decide on the planned 3D location of the installed cameras, measured in local coordinates.

3. Camera Selection

There are several factors in choosing a camera. The first is the protocol. The CIL has used FireWire and GigE cameras recently but each change of protocol has required a large investment in reprogramming our collection software. Typically we (well, mostly John Stanley) find that while cameras generally follow the operations listed in manuals, running a suite of six or so cameras in a synchronized mode for long durations tends to reveal many idiosyncrasies. Users should choose a camera type that allows robust and reliable collection. A discussion of camera choices is contained here.

Perhaps the most important factor is the number of pixels in the sensor (or the number of MegaPixels listed by the manufacturer). In the photogrammetry discussions these are described by NU and NV, the number of columns and rows in each image. Obviously the greater the number of pixels, the better the resolution of each camera (recalling that a lens of comparable quality is required). In some cases, the best way to achieve enough resolution for very distant views (far down a beach) is to use high-resolution cameras.
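The range dependence of resolution follows from simple geometry: a single pixel subtends roughly FOV/N radians, so its cross-range (alongshore) footprint grows linearly with slant range, while its ground-range (cross-shore) footprint grows roughly quadratically with range divided by camera height. The sketch below is a hedged approximation with illustrative names, in Python rather than Matlab; it is not the ArgusDesignDemo.m code:

```python
import math

def pixel_footprint(r, z, hfov_deg, vfov_deg, nu, nv):
    """Approximate ground footprint (m) of one pixel at ground range r
    for a camera at height z, using small-angle geometry.
    Returns (range_res, cross_range_res)."""
    h_ifov = math.radians(hfov_deg) / nu   # horizontal radians per pixel
    v_ifov = math.radians(vfov_deg) / nv   # vertical radians per pixel
    slant = math.hypot(r, z)
    # ground range = z*tan(tilt), so d(range)/d(tilt) = z + r*r/z
    range_res = (z + r * r / z) * v_ifov
    cross_res = slant * h_ifov
    return range_res, cross_res
```

For a camera 40 m above the water, the cross-shore footprint at 500 m range is an order of magnitude coarser than the alongshore footprint, which is why the cross-shore resolution usually drives the design.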

The physical size of the camera and its power requirements may also factor into decisions. Cameras have become very small, so they require only small housings and a small installation footprint. Recall that the smaller the chip size (not the camera size), the smaller the lens focal length that is required for any desired field of view. At times the available choices can be limited.

4. Selection of Lenses and Number of Cameras

The task now is to make decisions on the required number of cameras and also the selection of lenses. This is primarily based on an estimate of required pixel resolution and span of sampling. A matlab routine ArgusDesignDemo.m is included under Support Routines to use as a guide and is described below.

The process starts with defining the study region of interest as a vector [xmin xmax ymin ymax dx dy zLevel]. The domain boundaries are described by x and y min and max values while the projection surface of interest is located at zLevel. This would usually be 0.0, mean tide level, or some equivalent measure. dx and dy are simply the spatial resolution of the resolution map and have no lasting value; e.g. if dx = 10, resolution estimates are made every 10 m of the spatial domain in x. Following this, the left azimuth of sampling is entered as leftAz. For a straight beach this would usually be 0 degrees, aligned with the alongshore direction to the left (i.e. the y-axis).
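The grid implied by this vector can be sketched in Python (the Matlab routine works analogously; the function name here is illustrative, not from ArgusDesignDemo.m):

```python
def design_grid(xmin, xmax, ymin, ymax, dx, dy, z_level):
    """Build the analysis grid points for the resolution map,
    mirroring the [xmin xmax ymin ymax dx dy zLevel] design vector.
    Every point lies on the projection surface z = z_level."""
    xs = [xmin + i * dx for i in range(int((xmax - xmin) / dx) + 1)]
    ys = [ymin + j * dy for j in range(int((ymax - ymin) / dy) + 1)]
    return [(x, y, z_level) for y in ys for x in xs]
```

For the Duck example above ([50 500 0 1000 10 10 0.0]) this yields a 46-by-101 grid of evaluation points.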

The description of the cameras starts with the chip size, NU and NV, the number of pixels in the chip width and height. Then the horizontal field of view and vertical tilt are entered as arrays, i.e. one value for each camera. The routine determines the number of cameras by the number of field of view values entered. The vertical tilt is expressed in terms of the tilt of the top of the field of view. Usually this might be chosen as 91 degrees, i.e. just above the horizon so that the horizon is included in the view.
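The link between the entered horizontal field of view, the chip aspect ratio, and the camera-axis tilt can be sketched as below. This is an assumption-laden Python sketch, not the ArgusDesignDemo.m code: it assumes square pixels (so vFOV follows from hFOV and NV/NU) and tilt measured from vertical (nadir = 0 degrees, horizontal = 90 degrees), so a top-of-view tilt of 91 degrees places the horizon just inside the frame.

```python
import math

def center_tilt(top_tilt_deg, hfov_deg, nu, nv):
    """Camera-axis tilt (deg from vertical) given the tilt of the
    top edge of the field of view. Assumes square pixels, so
    vFOV = 2*atan(tan(hFOV/2) * NV/NU)."""
    vfov = 2.0 * math.degrees(
        math.atan(math.tan(math.radians(hfov_deg) / 2.0) * nv / nu))
    return top_tilt_deg - vfov / 2.0, vfov
```

For a 60-degree lens on a 1600-by-1200 chip with the top of view at 91 degrees, the camera axis ends up tilted roughly 67.6 degrees from vertical.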

Finally an overlap value is entered to describe the fraction of the horizontal field of view that overlaps the camera to the left. The default is 5%, so there will be 5% overlap between cameras to ensure continuity of view.

The analysis proceeds from left to right, so the first camera is aligned such that its left edge aligns with leftAz, the azimuth of the leftmost region of interest. The azimuth of each subsequent camera is chosen to abut the previous one, with a user-selected overlap. The region covered by the cameras continues until you run out of cameras; this may or may not span a full 180 degrees, if that was your intent, so you may need to change the number of cameras or their lenses to fill your sampling requirements.
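The left-to-right abutment logic can be sketched as follows. This is a Python paraphrase of the procedure just described, not the actual ArgusDesignDemo.m code; azimuths here are measured from leftAz and increase to the right:

```python
def camera_azimuths(left_az, hfovs, overlap=0.05):
    """Assign centre-of-view azimuths left to right so that each
    camera's left edge overlaps the previous camera's right edge
    by the given fraction of its field of view. Returns the list
    of azimuths (deg) and the total azimuth span covered."""
    azimuths = []
    left_edge = left_az
    right_edge = left_az
    for fov in hfovs:
        azimuths.append(left_edge + fov / 2.0)   # centre of this view
        right_edge = left_edge + fov
        left_edge = right_edge - fov * overlap   # next camera overlaps
    return azimuths, right_edge - left_az
```

With three 60-degree lenses and the default 5% overlap, the design covers 174 degrees rather than 180, illustrating why the lens choice may need a second iteration.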

The routine outputs a list of the azimuths, tilts and fields of view of your design as well as a sum of azimuths (without compensating for overlap). It also provides a plot (Figure 1) of the resolutions expressed in cross-shore as well as alongshore components. On this figure, the left and right edges of each camera's view are indicated using matching colors, and the overlap can be seen. Note that edges that lie along the beach may plot in the negative sense, i.e. a left edge of camera 1 may aim slightly landward and so get mapped in the negative direction.

Figure 1. Resolution maps for a hypothetical site showing the cross-shore (left) and alongshore (right) resolution maps. Matched color dashed line pairs correspond to the left and right edges of view for each camera.
