Shawn Harrison edited this page Jan 27, 2017 · 29 revisions

← Previous topic: Camera Spectral Response | Next topic: Field-of-view Calculations →

Brands

Lens choice depends on the camera. Check with the camera manufacturer for the best lenses for your cameras.
Here are links to some frequently used lenses:
Fujinon c-mount high definition
Fujinon fixed focal length

Lens Type -- C vs. CS

When matching a lens to a camera, it is important to match the lens mounting system. There are two common lens mounts for machine or scientific vision systems, "C" and "CS". The main difference is that CS mounts are designed with the lens closer to the sensor. The mount diameter and thread are the same, so it is physically possible to put a CS mount lens on a C mount camera and vice versa. The danger comes in using a C mount lens on a CS mount camera: when focusing the lens, the rear lens elements can move back far enough to strike the sensor. To avoid this problem, ring adapters are available that provide the necessary clearance, effectively converting a CS mount camera into a C mount camera.

Vignetting

Vignetting occurs when the image circle of the lens is smaller than the sensor chip in the camera, darkening or cutting off the corners of the image. See the example of vignetting below.

Lens Distortion

The photogrammetric relationships used to convert between image and world coordinates or into other image configurations like rectifications are based on the perfect pin-hole camera model. However, all camera-lens combinations have some amount of lens distortion that must be compensated for. This is done through lens calibration. Figure 1 shows an example of an image with extreme barrel distortion collected using a DJI Phantom 2 (Vision Plus), typical also of many GoPro images.

Figure 1. Example snap from a Phantom 2 with a great deal of barrel distortion. Note how curved the horizon is. Distortion scales with distance from the image center so the pier looks straighter than the horizon because it is mostly closer to the image center.

An advanced and free software package for dealing with lens distortion has been created by Jean-Yves Bouguet of Caltech and can be found by googling ‘Caltech lens distortion’ or using the link https://www.vision.caltech.edu/bouguetj/calib_doc/index.html.

The package has extensive documentation; however, we will repeat the basis of the algorithm here.

The Caltech toolbox performs the calibration of a camera/lens system based on viewing a checkerboard target from a suite of viewing angles (for example, Figure 2). Corners across the entire target are detected automatically based on user-clicked locations of the four outside corners, which must always be clicked in the same order, starting from the top-left corner as determined in the first image. Choose these enclosing corners such that full squares lie on the outside, not just small fractional parts of squares.

Figure 2. Checkerboard target for camera calibration. Roughly 20 images should be collected from a wide range of viewing angles. The bolts in the center helped keep the target flat and did not interfere with the analysis.

The algorithm solves for both the distortion coefficients of the lens and the intrinsic parameters of the lens/camera combination. The latter includes: the focal length of the lens, fc in their terms, expressed in pixel units and having components in the x and y direction of the image; the image skewness, alpha_c, which characterizes the degree to which the image x and y axes are not quite orthogonal; and the coordinates of the principal point of the sensor, the effective location of the pinhole in the pinhole camera model.
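The intrinsic parameters listed above are conventionally assembled into a 3x3 camera matrix. As a rough illustration, here is a Python sketch (with hypothetical numbers, not CIRN code) using the Caltech naming: fc for focal length in pixels, cc for the principal point, and alpha_c for skew:

```python
import numpy as np

def intrinsic_matrix(fc, cc, alpha_c=0.0):
    """Build the 3x3 camera matrix K from Caltech-style intrinsics."""
    return np.array([[fc[0], alpha_c * fc[0], cc[0]],
                     [0.0,            fc[1], cc[1]],
                     [0.0,              0.0,   1.0]])

# Hypothetical values for illustration only
K = intrinsic_matrix(fc=[2311.8, 2305.7], cc=[2009.3, 1148.6])
```

With alpha_c = 0 (orthogonal pixel axes, the usual case), the off-diagonal skew term vanishes.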

While we normally measure image coordinates in pixels from the top left image corner, [U, V], distortion is based on normalized measures equivalent to the tangent of the angle away from the focal point, i.e. x = (U-U0)/fU and y = (V-V0)/fV, where [U0, V0] are the pixel coordinates of the image center and [fU, fV] are the two components of the focal length, in pixels. If we denote the undistorted (pinhole) coordinates as Xu = [xu; yu] and the distorted coordinates as Xd = [xd; yd], and we measure the radial distance of any point from the principal point as r, where r^2 = xu^2 + yu^2, then the distorted coordinates can be found as

    Xd = (1 + kc(1) r^2 + kc(2) r^4 + kc(5) r^6) Xu + dx                  (1)

where

    dx = [ 2 kc(3) xu yu + kc(4) (r^2 + 2 xu^2) ;
           kc(3) (r^2 + 2 yu^2) + 2 kc(4) xu yu ]                         (2)

Equation (2) represents the tangential correction, which is usually negligible. Equation (1) adds this to the radial correction, the main part of lens distortion.
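As a concrete illustration of Equations (1) and (2), here is a Python sketch of the distortion model (an illustration only, not part of the Caltech toolbox or the CIRN codebase):

```python
def distort(xu, yu, kc):
    """Apply the radial + tangential distortion model to normalized
    (pinhole) coordinates.  kc = [kc1, kc2, kc3, kc4, kc5]: radial terms
    are kc1 (r^2), kc2 (r^4), kc5 (r^6); tangential terms are kc3, kc4."""
    r2 = xu**2 + yu**2
    radial = 1.0 + kc[0]*r2 + kc[1]*r2**2 + kc[4]*r2**3
    dx = 2.0*kc[2]*xu*yu + kc[3]*(r2 + 2.0*xu**2)   # Eq. (2), x component
    dy = kc[2]*(r2 + 2.0*yu**2) + 2.0*kc[3]*xu*yu   # Eq. (2), y component
    return radial*xu + dx, radial*yu + dy           # Eq. (1)
```

Distorted pixel coordinates then follow from Ud = U0 + fU*xd and Vd = V0 + fV*yd, inverting the normalization described above.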

The five coefficients in the kc vector are listed in the order returned by the Caltech toolbox. Note that the three radial coefficients are the first, second and fifth components of the distortion vector. Often the tangential terms and the sixth-order radial term (kc(5)) are neglected.

Once the Caltech toolbox has been downloaded, it is run in MATLAB by typing calib_gui. The steps generally follow the flow in the calibration toolbox window and are well described in the toolbox documentation. We have loaded a demo set of 30 images here that you can use for testing the method. These are for the 4000 x 2250 (9 MPixel) snapshot mode of a Phantom 3 Professional quadcopter. The target checkerboard has 6 cm squares (from the toolbox, printed locally and mounted on stiff cardboard). The center squares have a pattern of three circles that allows you to identify the target orientation in snapshots. For comparison, I found the following answers:

Focal Length: fc = [2311.82148 2305.68126] +/- [49.70365 9.44251]
Principal point: cc = [2009.26876 1148.64025] +/- [7.00181 96.34934]
Skew: alpha_c = [0.00000] +/- [0.00000] => angle of pixel axes = 90.00000 +/- 0.00000 degrees
Distortion: kc = [-0.01979 0.00799 0.00745 0.00177 0.00000] +/- [0.00497 0.01067 0.00122 0.00110 0.00000]
Pixel error: err = [0.74591 2.02399]

Implementation:

In the CIL, lens distortion data are stored in a database and are easily called. For our initial CIRN toolboxes, we have manually entered the information into an expected structure called a Lens Calibration Profile (a name taken from structure from motion work). An example, makeLCPP3, is included in the UAV toolbox and corresponds to an example DJI Phantom 3 Professional (P3P) quadcopter (the same images as are included in this lens distortion tutorial). The P3P allows image collection in five different modes, each with a different image size, hence covering different areas of the imaging chip. Thus, an LCP is needed for every imaging mode that you plan to analyze (i.e. you need to run the Caltech analysis for each mode and keep track of the different results). The routine makeLCPP3 supports the two imaging modes that I have used and decides which applies based on the size of the input image, NU (number of columns) and NV (number of rows).
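The image-size dispatch that makeLCPP3 performs can be sketched as follows (a Python illustration with placeholder profile names; the real routine is MATLAB and returns a full lcp structure):

```python
def pick_lcp(NU, NV):
    """Return the calibration profile keyed by image dimensions.
    The two sizes below are the P3P modes used in this tutorial;
    any other mode would need its own Caltech calibration."""
    profiles = {
        (3840, 2160): "P3P 4K video mode LCP",
        (4000, 2250): "P3P 9 MPixel snapshot mode LCP",
    }
    try:
        return profiles[(NU, NV)]
    except KeyError:
        raise ValueError(f"No LCP for image size {NU} x {NV}; "
                         "run a Caltech calibration for this mode")
```

Failing loudly on an unknown size is deliberate: silently reusing a calibration from a different mode would apply the wrong distortion model.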

An example of the structure is shown below (the 4K video mode from the P3P).

        lcp.NU = 3840;              % number of pixel columns
        lcp.NV = 2160;              % number of pixel rows
        lcp.c0U = 1957.13;          % two components of principal point
        lcp.c0V = 1088.21;
        lcp.fx = 2298.59;           % two components of focal length (in pixels)
        lcp.fy = 2310.87;
        lcp.d1 = -0.14185;          % radial distortion coefficients, same as kc(1), kc(2), kc(5)
        lcp.d2 =  0.11168;
        lcp.d3 = 0.0;
        lcp.t1 = 0.00369;           % tangential distortion coefficients, same as kc(3), kc(4)
        lcp.t2 = 0.002314;
        lcp.r = 0:0.001:1.5;        % the rest is needed for processing distortion
        lcp = makeRadDist(lcp);
        lcp = makeTangDist(lcp);    % add tangential dist template

The final three lines compute functional forms of the distortion as a function of radius, lcp.r, to simplify the later computation of distortion. makeRadDist and makeTangDist pre-compute the needed parts of the distortion equations above, saving the results as part of the lcp structure. In fact, lcp.r is overwritten in makeRadDist.m, so it could be removed from makeLCPP3.m (test this).
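The idea behind makeRadDist can be sketched as follows (a Python illustration of the pre-computation as described above, not the actual CIRN routine; the field names and sampling are assumptions based on the lcp example):

```python
import numpy as np

def make_rad_dist(lcp):
    """Pre-compute the radial distortion multiplier on a grid of radii,
    so later calls can interpolate instead of re-evaluating the polynomial.
    Note this overwrites any lcp['r'] set earlier."""
    r = np.arange(0.0, 1.5 + 1e-9, 0.001)
    lcp['r'] = r
    lcp['fr'] = 1.0 + lcp['d1']*r**2 + lcp['d2']*r**4 + lcp['d3']*r**6
    return lcp

# Values from the 4K-mode example above
lcp = make_rad_dist({'d1': -0.14185, 'd2': 0.11168, 'd3': 0.0})
# later: factor = np.interp(r_query, lcp['r'], lcp['fr'])
```

Interpolating a pre-tabulated curve is cheap when distortion must be applied to millions of pixel locations per image.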

Implementation of distortion removal (for geometry purposes) or un-distortion (to determine where a computed pixel location should lie in a sampled image) is carried out using the routines DJIDistort.m and DJIUndistort.m, located in the UAV toolbox. At this stage these names are overly specific to UAV work, because we still need to clarify distortion strategies in the CIL and in CIRN.

There is sometimes confusion about whether to use distort or undistort. For example, to create an undistorted rectification from an image, you end up using distort, not undistort. The reason is simple. The ideal rectification is designed as a set of world xyz points (usually a matrix of xy points with z set to 0 or tide level). The image intensities in this designed array are found by interpolation into an oblique image by first finding the corresponding location of the xyz point in terms of distorted UV coordinates.
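The lookup described above can be sketched as follows (a Python illustration only; the function name and the simple pinhole geometry with known rotation R and translation t are assumptions, not the CIRN implementation):

```python
import numpy as np

def world_to_distorted_uv(xyz, R, t, fU, fV, U0, V0, kc):
    """Map a world point to DISTORTED pixel coordinates: this is the
    'distort' direction used when building an undistorted rectification."""
    xc = R @ xyz + t                      # world -> camera coordinates
    xu, yu = xc[0]/xc[2], xc[1]/xc[2]     # pinhole (undistorted) coords
    r2 = xu**2 + yu**2
    radial = 1.0 + kc[0]*r2 + kc[1]*r2**2 + kc[4]*r2**3
    xd = radial*xu + 2*kc[2]*xu*yu + kc[3]*(r2 + 2*xu**2)
    yd = radial*yu + kc[2]*(r2 + 2*yu**2) + 2*kc[3]*xu*yu
    return U0 + fU*xd, V0 + fV*yd         # distorted pixel coordinates
```

The returned (U, V) is where the rectification code interpolates intensity out of the oblique image; no undistorted image is ever constructed.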

Important Note Regarding Distortion and Lens Style

Some people have asked about the use of auto-iris lenses in Argus and other quantitative work. Such a lens would seem to be valuable, since images could be collected earlier and later in the day. Early Argus stations did use such lenses, and images were often collected long after sunset. The initial idea was that we could use fixed lights (e.g., streetlamps) as references to improve geometries and detect camera motion.

Unfortunately, as the iris opens and closes, different parts of the lens are included or excluded from the image path. This has the (now obvious) result that the lens distortion CHANGES as the iris changes. For this reason, only fixed iris lenses should be used for quantitative work where distortion is to be taken into account, and any distortion calibration for a lens should be done using the f-stop that will be used in the field.

It is also worth noting that, through historical use, we have found that the longer the focal length, the less the radial distortion.
