stationary_camera_self_calibration — Perform a self-calibration of a stationary projective camera.
stationary_camera_self_calibration( : : NumImages, ImageWidth, ImageHeight, ReferenceImage, MappingSource, MappingDest, HomMatrices2D, Rows1, Cols1, Rows2, Cols2, NumCorrespondences, EstimationMethod, CameraModel, FixedCameraParams : CameraMatrices, Kappa, RotationMatrices, X, Y, Z, Error)
stationary_camera_self_calibration performs a self-calibration of a stationary projective camera. Here, stationary means that the camera may only rotate around the optical center and may zoom. Hence, the optical center may not move. Projective means that the camera model is a pinhole camera that can be described by a projective 3D-2D transformation. In particular, radial distortions can only be modeled for cameras with constant parameters. If the lens exhibits significant radial distortions they should be removed, at least approximately, with change_radial_distortion_image.
The camera model being used can be described as follows:
x = P * X

Here, x is a homogeneous 2D vector, X a homogeneous 3D vector, and P a homogeneous 3x4 projection matrix. The projection matrix P can be decomposed as follows:

P = K * R * (I | -t)

Here, K is the 3x3 calibration matrix containing the internal camera parameters (the focal length f, the aspect ratio a, the skew s, and the principal point (u,v)), R is a rotation matrix, I the 3x3 identity matrix, and t the position of the optical center.

Since the camera is stationary, it can be assumed that t = 0. With this convention, it is easy to see that the fourth coordinate of the homogeneous 3D vector X has no influence on the position of the projected 3D point. Consequently, the fourth coordinate can be set to 0, and X can be regarded as a point at infinity, i.e., it represents a direction in 3D. With this convention, the fourth coordinate of X can be omitted, and X can be regarded as an inhomogeneous 3D vector that is only determined up to scale, since it represents a direction. With this, the above projection equation can be written as follows:

x = K * R * X
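The projection of a 3D direction under this model can be sketched in a few lines of numpy. This is an illustrative sketch, not HALCON code: the numeric values and the exact placement of the aspect ratio and skew inside K are assumptions for illustration.

```python
import numpy as np

# Illustrative calibration matrix K (f: focal length in pixels,
# a: aspect ratio, s: skew, (u, v): principal point). The placement
# of a and s in the first row is an assumption of this sketch.
f, a, s, u, v = 800.0, 1.0, 0.0, 320.0, 240.0
K = np.array([[a * f,   s,   u],
              [0.0,     f,   v],
              [0.0,   0.0, 1.0]])

# Rotation of the camera (here: 10 degrees around the y axis).
t = np.deg2rad(10.0)
R = np.array([[ np.cos(t), 0.0, np.sin(t)],
              [ 0.0,       1.0, 0.0      ],
              [-np.sin(t), 0.0, np.cos(t)]])

def project(K, R, X):
    """Project a 3D direction X into the image: x ~ K * R * X."""
    x = K @ R @ X
    return x[:2] / x[2]          # inhomogeneous pixel coordinates

X = np.array([0.1, 0.2, 1.0])    # a viewing direction
p1 = project(K, R, X)
p2 = project(K, R, 5.0 * X)      # X is only determined up to scale ...
assert np.allclose(p1, p2)       # ... so the projected point is identical
```

The final assertion illustrates why X can be treated as a direction: scaling X cancels in the homogeneous division.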
From the above equation, constraints on the camera parameters can be derived in two ways. First, the rotation can be eliminated from the above equation, leading to equations that relate the camera matrices with the projective 2D transformation between two images. Let H_ji be the projective transformation from image i to image j. Then

x_j = H_ji * x_i   with   H_ji = K_j * R_j * R_i^T * K_i^(-1) .

Eliminating the rotations from this equation yields the constraint

H_ji * K_i * K_i^T * H_ji^T = K_j * K_j^T   (up to scale),

which relates the camera matrices of the two images directly.
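For a purely rotating camera, the inter-image homography H_ji = K_j * R_j * R_i^T * K_i^(-1) and the rotation-free constraint can be checked numerically. The following numpy sketch uses two synthetic cameras with illustrative parameter values:

```python
import numpy as np

def rot_y(t):
    return np.array([[ np.cos(t), 0.0, np.sin(t)],
                     [ 0.0,       1.0, 0.0      ],
                     [-np.sin(t), 0.0, np.cos(t)]])

def rot_z(t):
    return np.array([[np.cos(t), -np.sin(t), 0.0],
                     [np.sin(t),  np.cos(t), 0.0],
                     [0.0,        0.0,       1.0]])

# Two synthetic cameras sharing the optical center (illustrative values).
Ki = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
Kj = np.array([[900.0, 0.0, 320.0], [0.0, 900.0, 240.0], [0.0, 0.0, 1.0]])
Ri = rot_y(0.10)
Rj = rot_y(0.25) @ rot_z(0.05)

# Homography from image i to image j.
H = Kj @ Rj @ Ri.T @ np.linalg.inv(Ki)

# A 3D direction maps consistently through both cameras ...
X = np.array([0.2, -0.1, 1.0])
xi = Ki @ Ri @ X
xj = Kj @ Rj @ X
assert np.allclose(H @ xi, xj)

# ... and the rotation-free constraint holds (exactly here, since H
# carries no extra scale; up to scale in general).
assert np.allclose(H @ Ki @ Ki.T @ H.T, Kj @ Kj.T)
```

The second assertion is the self-calibration constraint: it contains only the camera matrices and the measurable homography, not the rotations.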
With each of the three estimation methods that can be selected via EstimationMethod, the camera parameters that should be computed can be specified. The remaining parameters are set to constant values. Which parameters are computed is determined with the parameter CameraModel, which contains a tuple of values. CameraModel must always contain the value 'focus', which specifies that the focal length f is computed. If CameraModel contains the value 'principal_point', the principal point (u,v) of the camera is computed; if not, the principal point is set to (ImageWidth/2, ImageHeight/2). If CameraModel contains the value 'aspect', the aspect ratio a of the pixels is determined; otherwise, it is set to 1. If CameraModel contains the value 'skew', the skew of the image axes is determined; otherwise, it is set to 0. Only the following combinations of the parameters are allowed: 'focus', ['focus', 'principal_point'], ['focus', 'aspect'], ['focus', 'principal_point', 'aspect'], and ['focus', 'principal_point', 'aspect', 'skew'].
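The allowed combinations can be checked with a few lines of Python; this is a minimal sketch of the rule stated above (it ignores the optional 'kappa' value, which is only available with EstimationMethod = 'gold_standard'):

```python
# Allowed CameraModel combinations as listed above; sets make the
# comparison independent of the order of the tuple values.
ALLOWED = [
    {'focus'},
    {'focus', 'principal_point'},
    {'focus', 'aspect'},
    {'focus', 'principal_point', 'aspect'},
    {'focus', 'principal_point', 'aspect', 'skew'},
]

def camera_model_valid(model):
    """Check a CameraModel tuple against the allowed combinations."""
    return set(model) in ALLOWED

assert camera_model_valid(['focus', 'principal_point'])
assert not camera_model_valid(['focus', 'skew'])  # 'skew' needs full model
```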
Additionally, it is possible to determine the parameter Kappa, which models radial lens distortions, if EstimationMethod = 'gold_standard' has been selected. In this case, 'kappa' can also be included in the parameter CameraModel. Kappa corresponds to the radial distortion parameter of the division model for lens distortions (see camera_calibration).
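The division model relates undistorted and distorted image plane coordinates through the single parameter Kappa. The following numpy sketch uses the common form u = u~ / (1 + kappa * (u~^2 + v~^2)) for the undistortion together with its closed-form inverse; the coordinates are metric image plane coordinates, not pixels, and the exact convention should be checked against the camera_calibration documentation:

```python
import numpy as np

def undistort(pd, kappa):
    """Division model: distorted -> undistorted image plane coordinates."""
    r2 = np.sum(pd ** 2)
    return pd / (1.0 + kappa * r2)

def distort(pu, kappa):
    """Closed-form inverse of the division model."""
    r2 = np.sum(pu ** 2)
    return 2.0 * pu / (1.0 + np.sqrt(1.0 - 4.0 * kappa * r2))

kappa = -0.05                    # illustrative value (barrel distortion)
pu = np.array([0.4, -0.3])       # an undistorted image plane point
pd = distort(pu, kappa)
assert np.allclose(undistort(pd, kappa), pu)   # exact round trip
```

The round trip is exact because the division model, unlike polynomial distortion models, can be inverted analytically.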
When using EstimationMethod = 'gold_standard' to determine the principal point, it is possible to penalize estimates that lie far away from the image center. This is done by appending a sigma to the value 'principal_point', separated by a colon, e.g., 'principal_point:0.5'. If no sigma is given, no penalty term is used in the error that is minimized.
The parameter FixedCameraParams determines whether the camera parameters can change in each image or whether they are assumed to be constant for all images. To calibrate a camera so that it can later be used for measuring, only FixedCameraParams = 'true' is useful. The mode FixedCameraParams = 'false' is mainly useful to compute spherical mosaics with gen_spherical_mosaic if the camera zoomed or if the focus changed significantly while the mosaic images were taken. If a mosaic with constant camera parameters is to be computed, FixedCameraParams = 'true' should be used, of course. It should be noted that for FixedCameraParams = 'false' the camera calibration problem is very poorly conditioned, especially for long focal lengths. In these cases, often only the focal length can be determined accurately. Therefore, it may be necessary to use CameraModel = 'focus' or to constrain the position of the principal point by using a small sigma for the corresponding penalty term.
The number of images that are used for the calibration is passed in NumImages. Based on the number of images, several constraints on the camera model must be observed. If only two images are used, not all camera parameters can be determined, even under the assumption of constant parameters. In this case, the skew of the image axes should be set to 0 by not adding 'skew' to CameraModel. If FixedCameraParams = 'false' is used, the full set of camera parameters can never be determined, no matter how many images are used. In this case, the skew should be set to 0 as well. Furthermore, it should be noted that the aspect ratio can only be determined accurately if at least one image is rotated around the optical axis (the z axis of the camera coordinate system) with respect to the other images. If this is not the case, the computation of the aspect ratio should be suppressed by not adding 'aspect' to CameraModel.
As described above, to calibrate the camera it is necessary to determine the projective transformation for each overlapping image pair with proj_match_points_ransac. For example, for a 2x2 block of images in the following layout

  1 2
  3 4
the following projective transformations should be determined, assuming that all images overlap each other: 1->2, 1->3, 1->4, 2->3, 2->4, and 3->4. The indices of the images that determine the respective transformation are given by MappingSource and MappingDest. The indices start at 1. Consequently, in the above example MappingSource = [1,1,1,2,2,3] and MappingDest = [2,3,4,3,4,4] must be used. The number of images in the mosaic is given by NumImages. It is used to check whether each image can be reached by a chain of transformations. The index of the reference image is given by ReferenceImage. On output, this image has the identity matrix as its transformation matrix.
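For a mosaic in which all images overlap each other, the two index tuples can be generated mechanically. The following Python sketch (illustrative, not HALCON code) enumerates all image pairs with 1-based indices:

```python
from itertools import combinations

def pairwise_mappings(num_images):
    """MappingSource/MappingDest for a mosaic in which all images
    overlap each other (1-based indices, as the operator expects)."""
    pairs = list(combinations(range(1, num_images + 1), 2))
    source = [i for i, _ in pairs]
    dest = [j for _, j in pairs]
    return source, dest

src, dst = pairwise_mappings(4)
assert src == [1, 1, 1, 2, 2, 3]   # matches the 2x2 example above
assert dst == [2, 3, 4, 3, 4, 4]
```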
The 3x3 projective transformation matrices that correspond to the image pairs are passed in HomMatrices2D. Additionally, the coordinates of the matched point pairs in the image pairs must be passed in Rows1, Cols1, Rows2, and Cols2. They can be determined from the output of proj_match_points_ransac with tuple_select or with the HDevelop function subset. To enable stationary_camera_self_calibration to determine which point pair belongs to which image pair, NumCorrespondences must contain the number of found point matches for each image pair.
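The coordinate tuples are flat concatenations over all image pairs, and NumCorrespondences encodes where each pair's block starts and ends. A small numpy sketch of this bookkeeping, with made-up coordinate values:

```python
import numpy as np

# Illustrative flat coordinate tuple for two image pairs with 3 and 2
# point matches, respectively (the operator receives such
# concatenated tuples in Rows1, Cols1, Rows2, and Cols2).
rows1 = np.array([10.0, 20.0, 30.0, 15.0, 25.0])
num_correspondences = [3, 2]

# Split the flat tuple back into one block per image pair.
offsets = np.cumsum([0] + num_correspondences)
blocks = [rows1[offsets[k]:offsets[k + 1]]
          for k in range(len(num_correspondences))]

assert [list(b) for b in blocks] == [[10.0, 20.0, 30.0], [15.0, 25.0]]
```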
The computed camera matrices are returned in CameraMatrices as 3x3 matrices. For FixedCameraParams = 'false', NumImages matrices are returned. Since for FixedCameraParams = 'true' all camera matrices are identical, a single camera matrix is returned in this case. The computed rotations are returned in RotationMatrices as 3x3 matrices. RotationMatrices always contains NumImages matrices.
If EstimationMethod = 'gold_standard' is used, (X, Y, Z) contains the reconstructed 3D directions of the points. In addition, Error contains the average projection error of the reconstructed directions. This can be used to check whether the optimization has converged to useful values.
If the computed camera parameters are used to project 3D points or 3D directions into image i, the respective camera matrix should be multiplied with the corresponding rotation matrix (e.g., with hom_mat2d_compose).
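This composition can be sketched in numpy (illustrative values; in HDevelop the two 3x3 matrices would be composed with hom_mat2d_compose):

```python
import numpy as np

# Illustrative camera matrix and per-image rotation. For
# FixedCameraParams = 'true' there is one K for all images and one
# rotation matrix R_i per image.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
c, s = np.cos(0.2), np.sin(0.2)
R_i = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

# The full mapping for image i is the product of the two matrices.
P_i = K @ R_i

d = np.array([0.0, 0.0, 1.0])   # a 3D direction (the optical axis)
x = P_i @ d                     # homogeneous image point

# Sanity check: with the identity rotation, the optical axis projects
# to the principal point (320, 240) of the illustrative K.
x0 = K @ np.eye(3) @ d
assert np.allclose(x0[:2] / x0[2], [320.0, 240.0])
```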
Number of different images that are used for the calibration.
Restriction: NumImages >= 2
Width of the images from which the points were extracted.
Restriction: ImageWidth > 0
Height of the images from which the points were extracted.
Restriction: ImageHeight > 0
Index of the reference image.
Indices of the source images of the transformations.
Indices of the target images of the transformations.
Array of 3x3 projective transformation matrices.
Row coordinates of corresponding points in the respective source images.
Column coordinates of corresponding points in the respective source images.
Row coordinates of corresponding points in the respective destination images.
Column coordinates of corresponding points in the respective destination images.
Number of point correspondences in the respective image pair.
Estimation algorithm for the calibration.
Default value: 'gold_standard'
List of values: 'gold_standard', 'linear', 'nonlinear'
Camera model to be used.
Default value: ['focus','principal_point']
List of values: 'aspect', 'focus', 'kappa', 'principal_point', 'skew'
Are the camera parameters identical for all images?
Default value: 'true'
List of values: 'false', 'true'
(Array of) 3x3 projective camera matrices that determine the internal camera parameters.
Radial distortion of the camera.
Array of 3x3 transformation matrices that determine the rotation of the camera in the respective image.
X-Component of the direction vector of each point if EstimationMethod = 'gold_standard' is used.
Y-Component of the direction vector of each point if EstimationMethod = 'gold_standard' is used.
Z-Component of the direction vector of each point if EstimationMethod = 'gold_standard' is used.
Average error per reconstructed point if EstimationMethod = 'gold_standard' is used.
* Assume that Images contains four images in the layout given in the
* above description. Then the following example performs the camera
* self-calibration using these four images.
From := [1,1,1,2,2,3]
To := [2,3,4,3,4,4]
HomMatrices2D := []
Rows1 := []
Cols1 := []
Rows2 := []
Cols2 := []
NumMatches := []
for J := 0 to |From|-1 by 1
    select_obj (Images, ImageF, From[J])
    select_obj (Images, ImageT, To[J])
    points_foerstner (ImageF, 1, 2, 3, 100, 0.1, 'gauss', 'true', \
                      RowsF, ColsF, _, _, _, _, _, _, _, _)
    points_foerstner (ImageT, 1, 2, 3, 100, 0.1, 'gauss', 'true', \
                      RowsT, ColsT, _, _, _, _, _, _, _, _)
    proj_match_points_ransac (ImageF, ImageT, RowsF, ColsF, RowsT, ColsT, \
                              'ncc', 10, 0, 0, 480, 640, 0, 0.5, \
                              'gold_standard', 2, 42, HomMat2D, \
                              Points1, Points2)
    HomMatrices2D := [HomMatrices2D,HomMat2D]
    Rows1 := [Rows1,subset(RowsF,Points1)]
    Cols1 := [Cols1,subset(ColsF,Points1)]
    Rows2 := [Rows2,subset(RowsT,Points2)]
    Cols2 := [Cols2,subset(ColsT,Points2)]
    NumMatches := [NumMatches,|Points1|]
endfor
stationary_camera_self_calibration (4, 640, 480, 1, From, To, \
                                    HomMatrices2D, Rows1, Cols1, \
                                    Rows2, Cols2, NumMatches, \
                                    'gold_standard', \
                                    ['focus','principal_point'], \
                                    'true', CameraMatrix, Kappa, \
                                    RotationMatrices, X, Y, Z, Error)
If the parameters are valid, the operator stationary_camera_self_calibration returns the value 2 (H_MSG_TRUE). If necessary an exception is raised.
Lourdes Agapito, E. Hayman, I. Reid: “Self-Calibration of Rotating and Zooming Cameras”; International Journal of Computer Vision; vol. 45, no. 2; pp. 107--127; 2001.