GCPs
← Previous topic: Intrinsic Calibration | Next topic: Extrinsic Calibration →
Ground Control Points, or GCPs, are points whose world coordinates are known by survey and whose image coordinates can be accurately digitized from an image. These are required for extrinsic calibration of an installed camera.
They can be any identifiable feature, for example the intersection of white pavement markings or a manhole cover. Or they can be targets that have been added to the scene for just this purpose, for example the checkerboard patterns in Figure 1.
Figure 1. Example checkerboard GCP on the FRF pier. The target is about 1 m square in order to be visible from high-flying UAVs. The cement block keeps the target from blowing away in the wind and should not cover the target center.
The world location of each GCP, [x, y, z], must be found by survey in local coordinates at a target point that can easily be identified in the image. For the checkerboard, that location would typically be the intersection of the border lines; for a manhole cover it would likely be the geometric center. The corresponding image location of each GCP, [U, V], must also be determined, often by zooming in and clicking in the image. Note that pixel resolution is often around 10 cm, so both the survey and the image-location picking need an accuracy commensurate with the pixel resolution, or with the required accuracy of subsequent analysis. Often it is desirable to identify GCP image locations to better than one pixel.
Accurate image coordinates can often be found using a center of mass (COM) calculation. If the target is visually identifiable within a local region by being either brighter or darker than the surroundings, the GCP pixels can be found by thresholding, and the center of mass computed as the mean of the U and V locations of all thresholded pixels. This method is usually accurate to a fraction of a single pixel, often called sub-pixel accuracy. This is the method used in the UAV toolbox in the routine called findCOMRefObj.m (in the first 30 lines of code).
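The thresholded center-of-mass idea can be sketched in a few lines. The toolbox routine findCOMRefObj.m is MATLAB; the Python function below is an illustrative sketch only, and its name, arguments, and window convention are assumptions, not toolbox API.

```python
import numpy as np

def com_location(gray, bbox, thresh, bright=True):
    """Sub-pixel [U, V] of a bright (or dark) target by center of mass.

    gray   : 2D grayscale image array
    bbox   : (u0, v0, width, height) search window in pixels
    thresh : intensity threshold separating target from background
    """
    u0, v0, w, h = bbox
    win = gray[v0:v0 + h, u0:u0 + w]
    mask = win > thresh if bright else win < thresh
    if not mask.any():
        return None  # target not found in this window
    v_idx, u_idx = np.nonzero(mask)
    # Center of mass: mean of the U and V coordinates of all
    # thresholded pixels, shifted back to full-image coordinates.
    return np.array([u0 + u_idx.mean(), v0 + v_idx.mean()])
```

Because the COM averages over many pixels, the result is generally much finer than one pixel, which is what makes this approach attractive for GCP digitization.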
Often none of the surveyed control points is visually distinct enough to be recognized automatically. However, other objects may be in view that could be identified automatically but for which we have no survey coordinates. This limitation can be overcome by computing a world location equivalent to the one we would have surveyed, had we been able to; that is, we can create a "virtual GCP" whose world position is equivalent to its true survey location.
The process is straightforward. The camera geometry is first solved using GCPs whose survey and image locations can be found manually. If we later wish to use a virtual GCP, say a white sign that is plainly visible but was not surveyed, we start by finding its [U, V] location. We cannot solve for the full [x, y, z] directly, since we would have three unknowns but only two knowns (U and V). But we can arbitrarily choose a value for one coordinate; for example, we might ask for the location of a point whose z coordinate is zero (where would this point be if it were located at sea level?), or perhaps some guess at a longshore location. Whether the guess is correct is unimportant. Knowing the [U, V] location and the assumed vertical coordinate, we can solve for the equivalent [x, y]. The computed [x, y, z] is then equivalent to what we would have surveyed (i.e. it lies in the identical direction from the camera, only likely farther away).
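To make the "two knowns, one assumed coordinate" step concrete: with a pinhole camera described by a 3×4 projection matrix P, fixing z turns the projection equations into two linear equations in x and y. The toolbox uses its own geometry routines; the function below is a generic illustrative sketch, and the matrix P and function name are assumptions, not toolbox API.

```python
import numpy as np

def xy_at_assumed_z(P, U, V, z):
    """Solve for world [x, y] of image point (U, V), assuming height z.

    P is a 3x4 projection matrix mapping homogeneous world coordinates
    to homogeneous image coordinates: w*[U, V, 1]' = P @ [x, y, z, 1]'.
    Eliminating the scale w leaves two linear equations in x and y.
    """
    A = np.array([
        [P[0, 0] - U * P[2, 0], P[0, 1] - U * P[2, 1]],
        [P[1, 0] - V * P[2, 0], P[1, 1] - V * P[2, 1]],
    ])
    b = -np.array([
        (P[0, 2] - U * P[2, 2]) * z + (P[0, 3] - U * P[2, 3]),
        (P[1, 2] - V * P[2, 2]) * z + (P[1, 3] - V * P[2, 3]),
    ])
    x, y = np.linalg.solve(A, b)
    return np.array([x, y, z])
```

Whatever z is assumed, the returned point lies along the same ray from the camera through the image point, which is exactly the property a virtual GCP needs.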
This process is used in the UAV toolbox. A set of reference objects (small user-defined search windows) is established in the routine initUAVAnalysis. Each is defined by a bounding box, a threshold intensity (pixels brighter than this are kept), and an equivalent [x, y, z] location. The [x, y, z] location is found by arbitrarily assuming z = 0 and solving for [x, y] using the manually computed geometry for the first frame.
The advantage is that these reference objects can be found automatically for every frame after the first, so no further manual action is required in analyzing a long UAV video whose aim point wanders slightly.
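The per-frame automation described above can be sketched as a simple loop: for each frame, threshold each reference object's search window, take the center of mass as that frame's [U, V], and pair it with the object's precomputed virtual-GCP world location. This is a self-contained illustrative sketch, not the toolbox implementation; the data layout and names are assumptions.

```python
import numpy as np

def track_ref_objs(frames, ref_objs):
    """For each frame, locate each reference object automatically.

    frames   : iterable of 2D grayscale image arrays
    ref_objs : list of dicts with keys
               'bbox'   -> (u0, v0, width, height) search window,
               'thresh' -> intensity threshold (brighter pixels kept),
               'xyz'    -> virtual GCP world location from frame 1.
    Returns, per frame, a list of ([U, V], xyz) pairs ready to feed
    an extrinsic (geometry) solution for that frame.
    """
    per_frame_gcps = []
    for frame in frames:
        gcps = []
        for obj in ref_objs:
            u0, v0, w, h = obj['bbox']
            win = frame[v0:v0 + h, u0:u0 + w]
            v_idx, u_idx = np.nonzero(win > obj['thresh'])
            if u_idx.size == 0:
                continue  # object lost in this frame
            UV = np.array([u0 + u_idx.mean(), v0 + v_idx.mean()])
            gcps.append((UV, obj['xyz']))
        per_frame_gcps.append(gcps)
    return per_frame_gcps
```

As long as the aim point wanders by less than the search-window size, each object stays inside its window and no manual digitization is needed after the first frame.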