Extrinsic Calib
The process of finding R and C is called extrinsic calibration. This can only be done after camera installation and requires estimation of the three viewing angles (tilt, azimuth and roll) that make up R, as well as the x, y, z location of the camera that makes up C, a total of 6 unknowns.
It is possible to directly measure some or all of these unknowns. For instance, for fixed cameras located on a structure, it is usually possible to survey the camera location with sufficient accuracy to be useful, reducing the number of unknowns to 3. While it is impossible to place a survey instrument in the middle of a camera body, where the equivalent "pin hole" would be, you can survey to the top of the camera and estimate the vertical offset between this and the axis of the lens. Experience has shown us that the equivalent "pin hole" location for lenses is approximately 1" toward the camera from the front surface of the lens.
It is much harder to measure the viewing angles of a camera to sufficient accuracy. To scale the problem, a 0.25 degree error in an estimated viewing angle, looking 300 m from a camera on a 40 m tower, would yield a cross-shore error of 1.3 m and a 10 m longshore error. Instead, these extrinsic calibration values are usually found using Ground Control Points (GCPs), points whose world coordinates are known by survey and whose image coordinates can be accurately digitized from an image. Combining equations (1) and (2) in the photogrammetry section and applying these to a set of such points, the only unknowns are the 6 camera parameters, so these can be found by a standard nonlinear solver (comparing measured and predicted image coordinates for a guess at the 6 unknowns, then searching for the optimum values that minimize the sum of squared differences). These values should be stored in a database or file. For a fixed camera, they need, in principle, only be found once. However, our experience shows that no camera is truly fixed; all move slightly due to wind, thermal effects or structure aging (we used to be able to distinguish sunny from cloudy days at Duck by variations in the camera tilt). Thus users should be sensitive to geometry variations at a station, especially when newly installed.
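The angular-error scaling quoted above can be checked with a short calculation. The tower height, ground range and 0.25 degree error come from the text; the small-angle formulas are standard geometry, and whether the two errors map to cross-shore or longshore depends on the camera's look direction.

```python
# Quick check of the viewing-angle error scaling quoted above.
import math

h = 40.0                      # camera height above the beach (m)
r = 300.0                     # horizontal (ground) range to the target (m)
dtheta = math.radians(0.25)   # assumed error in a viewing angle

# A tilt error moves the pixel footprint along the view direction.
# Ground range is r = h * tan(tilt), so dr = h * (1 + (r/h)**2) * dtheta.
range_error = h * (1.0 + (r / h) ** 2) * dtheta

# An azimuth error sweeps the footprint sideways: ds ~ r * dtheta.
transverse_error = r * dtheta

print(f"error along the view direction: {range_error:.1f} m")      # ~10 m
print(f"error transverse to the view:   {transverse_error:.1f} m")  # ~1.3 m
```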
At the CIL, the solution for extrinsic variables is carried out using a routine called geomTool6.m that makes use of many CIL database conventions. For the UAV toolbox, a standalone example of the solution is found in the routine initUAVAnalysis on line 31. The vector of extrinsic unknowns is often called beta. At its heart, extrinsic calibration can be carried out using the MATLAB routine nlinfit, a general least squares fitting routine, with a call of the form
beta = nlinfit(xyz,UV,'findUV',beta0);
The inputs are a list of the xyz locations of the GCPs, their corresponding UV locations and beta0, an initial guess at the extrinsic parameters (up to 6 degrees of freedom). findUV is a routine that computes test UV coordinates for any guess of beta. By comparing the test and measured UV coordinates, the optimum parameters, beta, are found iteratively as the vector of unknowns that minimizes the squared differences between the test and measured values. Note that the above is a simplified version of the call; see the UAV toolbox, noted above, for a working example.
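As a concrete illustration, the same fit can be sketched in Python with SciPy's least_squares standing in for nlinfit. The pinhole model, the angle conventions and the ordering of beta below are illustrative assumptions, not the actual findUV conventions of the UAV toolbox.

```python
# Sketch of the nlinfit-style extrinsic solve using scipy.optimize.least_squares.
# Model conventions here are assumptions for illustration only.
import numpy as np
from scipy.optimize import least_squares

F = 2000.0               # assumed focal length in pixels (intrinsic, known)
U0, V0 = 1000.0, 750.0   # assumed image center in pixels (intrinsic, known)

def world_to_camera(azimuth, tilt, roll):
    """Rotation R built from the three viewing angles (one possible convention)."""
    ca, sa = np.cos(azimuth), np.sin(azimuth)
    ct, st = np.cos(tilt), np.sin(tilt)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[ca, sa, 0.0], [-sa, ca, 0.0], [0.0, 0.0, 1.0]])  # azimuth
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, ct, -st], [0.0, st, ct]])  # tilt
    Rr = np.array([[cr, sr, 0.0], [-sr, cr, 0.0], [0.0, 0.0, 1.0]])  # roll
    return Rr @ Rx @ Rz

def find_uv(beta, xyz):
    """Predicted image coordinates for GCPs, given a guess of the 6 unknowns
    beta = [xCam, yCam, zCam, azimuth, tilt, roll]."""
    cam, angles = beta[:3], beta[3:]
    p = (xyz - cam) @ world_to_camera(*angles).T   # GCPs in the camera frame
    return np.column_stack([U0 + F * p[:, 0] / p[:, 2],
                            V0 + F * p[:, 1] / p[:, 2]])

def solve_extrinsics(xyz, uv, beta0):
    """Rough equivalent of beta = nlinfit(xyz, UV, 'findUV', beta0)."""
    residuals = lambda b: (find_uv(b, xyz) - uv).ravel()
    return least_squares(residuals, beta0).x

# Synthetic check: four GCPs imaged by a known camera on a 40 m tower,
# recovered starting from a slightly perturbed initial guess.
true_beta = np.array([0.0, 0.0, 40.0, 0.1, 1.2, 0.02])
xyz = np.array([[50.0, 200.0, 2.0], [-30.0, 250.0, 1.0],
                [80.0, 300.0, 0.0], [0.0, 150.0, 3.0]])
uv = find_uv(true_beta, xyz)
beta = solve_extrinsics(xyz, uv, true_beta + 0.05)  # should recover true_beta
```

With perfect synthetic GCPs the residual at the solution is zero; with real, digitized GCPs the residual norm is a useful sanity check on the fit.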
Since there are 6 unknowns, we need at least 6 equations for a solution. Each control point contributes 2 (its U and V coordinates), so at least three points are needed. We prefer the problem to be over-determined, so we use at least four points in the following tests.
For terrestrial applications it is typically easy to find or place an abundance of GCPs throughout the view to allow solution of camera extrinsic geometries. This is at the heart of Structure from Motion algorithms like those from Agisoft or Pix4D. However, surf zone images usually contain only a minimal amount of land by design, so GCP options are often limited and poorly distributed over the image, often lying in a line along the dune crest, a configuration that makes the inverse solution ill-posed. For these cases, common in nearshore studies, we must rely on alternate sources of information to reduce the number of degrees of freedom and the requirements on GCP layout.
For a fixed station, the locations of the cameras are usually known by survey, so that only the 3 rotation angles remain unknown. These can be solved by nonlinear fit to at least two GCPs, though 3-4 or more are preferable. If the roll of the camera can acceptably be assumed to be fixed, only 2 degrees of freedom remain. It is possible to find these with only one GCP, as discussed in Holman, Brodie and Spore [2017], although errors average around 10 m and can be large in the far field.
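A minimal sketch of this reduced problem, again under an assumed pinhole model rather than actual CIL conventions: the surveyed camera position is held fixed and only the three viewing angles are fitted, so three GCPs already give six equations for three unknowns.

```python
# Reduced extrinsic solve for a surveyed (fixed) camera: position is known,
# only the viewing angles are fitted. Conventions are illustrative assumptions.
import numpy as np
from scipy.optimize import least_squares

F, U0, V0 = 2000.0, 1000.0, 750.0   # assumed intrinsics (known)
CAM = np.array([0.0, 0.0, 40.0])    # surveyed camera position (known)

def find_uv(angles, xyz):
    """Pinhole projection with the camera position fixed at CAM."""
    az, tilt, roll = angles
    Rz = np.array([[np.cos(az), np.sin(az), 0], [-np.sin(az), np.cos(az), 0], [0, 0, 1]])
    Rx = np.array([[1, 0, 0], [0, np.cos(tilt), -np.sin(tilt)], [0, np.sin(tilt), np.cos(tilt)]])
    Rr = np.array([[np.cos(roll), np.sin(roll), 0], [-np.sin(roll), np.cos(roll), 0], [0, 0, 1]])
    p = (xyz - CAM) @ (Rr @ Rx @ Rz).T
    return np.column_stack([U0 + F * p[:, 0] / p[:, 2], V0 + F * p[:, 1] / p[:, 2]])

# Three GCPs supply 6 equations for the 3 remaining unknowns.
true_angles = np.array([0.1, 1.2, 0.02])
xyz = np.array([[50.0, 200.0, 2.0], [-30.0, 250.0, 1.0], [80.0, 300.0, 0.0]])
uv = find_uv(true_angles, xyz)
fit = least_squares(lambda a: (find_uv(a, xyz) - uv).ravel(), true_angles + 0.05)
angles = fit.x  # should recover the three viewing angles
```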
In some cases, certain geometry parameters may be known, but only approximately. For instance, UAV platforms may record GPS position as metadata, which can be considered accurate to within several meters. In principle this information can be used to constrain, or regularize, the geometry solution as a Bayesian-type inversion. This has been implemented for testing purposes in the geometry demo of the UAV-Processing-Toolbox.
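One simple way to implement such a constraint, sketched here under assumed model conventions rather than the actual toolbox demo code, is to append weighted prior residuals for the GPS camera position to the image-misfit residuals, so the fit is pulled toward the metadata value in proportion to its assumed accuracy.

```python
# Sketch of a regularized (Bayesian-style) extrinsic fit: approximate GPS
# camera coordinates enter as extra weighted residual rows. Model, weights
# and names are illustrative assumptions, not the toolbox implementation.
import numpy as np
from scipy.optimize import least_squares

F, U0, V0 = 2000.0, 1000.0, 750.0   # assumed intrinsics (known)

def find_uv(beta, xyz):
    """Pinhole projection; beta = [xCam, yCam, zCam, azimuth, tilt, roll]."""
    az, tilt, roll = beta[3], beta[4], beta[5]
    Rz = np.array([[np.cos(az), np.sin(az), 0], [-np.sin(az), np.cos(az), 0], [0, 0, 1]])
    Rx = np.array([[1, 0, 0], [0, np.cos(tilt), -np.sin(tilt)], [0, np.sin(tilt), np.cos(tilt)]])
    Rr = np.array([[np.cos(roll), np.sin(roll), 0], [-np.sin(roll), np.cos(roll), 0], [0, 0, 1]])
    p = (xyz - beta[:3]) @ (Rr @ Rx @ Rz).T
    return np.column_stack([U0 + F * p[:, 0] / p[:, 2], V0 + F * p[:, 1] / p[:, 2]])

def residuals(beta, xyz, uv, gps_xyz, sigma_uv=1.0, sigma_gps=3.0):
    """Image misfit (in pixels) plus a prior tying the position to GPS (in m)."""
    image_part = (find_uv(beta, xyz) - uv).ravel() / sigma_uv
    prior_part = (beta[:3] - gps_xyz) / sigma_gps
    return np.concatenate([image_part, prior_part])

# Synthetic check: the GPS metadata is off by a few meters, but the GCPs
# dominate the fit, so the recovered beta stays close to the true geometry.
true_beta = np.array([0.0, 0.0, 40.0, 0.1, 1.2, 0.02])
xyz = np.array([[50.0, 200.0, 2.0], [-30.0, 250.0, 1.0],
                [80.0, 300.0, 0.0], [0.0, 150.0, 3.0]])
uv = find_uv(true_beta, xyz)
gps_xyz = true_beta[:3] + np.array([1.0, -2.0, 0.5])
beta = least_squares(residuals, true_beta + 0.05, args=(xyz, uv, gps_xyz)).x
```

When GCPs are sparse or poorly distributed, the prior rows keep the otherwise ill-posed position components from wandering; the weights simply reflect the assumed pixel and GPS uncertainties.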