proj_match_points_distortion_ransac_guided — Compute a projective transformation matrix and the radial distortion coefficient between two images by finding correspondences between points based on known approximations of the projective transformation matrix and the radial distortion coefficient.
proj_match_points_distortion_ransac_guided(Image1, Image2 : : Rows1, Cols1, Rows2, Cols2, GrayMatchMethod, MaskSize, HomMat2DGuide, KappaGuide, DistanceTolerance, MatchThreshold, EstimationMethod, DistanceThreshold, RandSeed : HomMat2D, Kappa, Error, Points1, Points2)
Given a set of coordinates of characteristic points (Rows1,Cols1) and (Rows2,Cols2) in both input images Image1 and Image2, which must have identical size, and given known approximations HomMat2DGuide and KappaGuide for the transformation matrix and the radial distortion coefficient between Image1 and Image2, proj_match_points_distortion_ransac_guided automatically determines corresponding points, the homogeneous projective transformation matrix HomMat2D, and the radial distortion coefficient Kappa that optimally fulfill the following constraint: after the radial distortion described by Kappa has been removed from the points in both images, the undistorted points are related by the projective transformation HomMat2D.
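This constraint can be made concrete with a small NumPy sketch of the division model for radial distortion (undistorted coordinates are the distorted ones, measured from the image center, scaled by 1/(1 + Kappa·r²)); matched points, once undistorted with Kappa, should map onto each other under HomMat2D. This is an illustrative sketch under stated assumptions, not the HALCON API; the (row, column, 1) matrix convention is an assumption.

```python
import numpy as np

def undistort_point(r, c, kappa, width, height):
    """Remove radial distortion (division model) from a pixel coordinate.
    Assumes the distortion center is the image center, matching the CamPar
    construction on this page. Illustrative sketch, not the HALCON API."""
    cy, cx = 0.5 * (height - 1), 0.5 * (width - 1)
    dr, dc = r - cy, c - cx
    scale = 1.0 / (1.0 + kappa * (dr * dr + dc * dc))
    return cy + dr * scale, cx + dc * scale

def project(hom_mat, r, c):
    """Apply a 3x3 projective transformation to a point (row, col)."""
    x, y, w = hom_mat @ np.array([r, c, 1.0])
    return x / w, y / w
```

With kappa = 0 the undistortion is the identity, and the identity matrix leaves a point unchanged, which makes the two building blocks easy to check in isolation.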
The returned Kappa can be used to construct camera parameters that can be used to rectify images or points (see change_radial_distortion_cam_par, change_radial_distortion_image, and change_radial_distortion_points):
CamPar = [0.0,Kappa,1.0,1.0,0.5*(w-1),0.5*(h-1),w,h]
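In code, this camera parameter tuple can be assembled as follows (a minimal Python sketch of the formula above; the zero focal length marks the pure 2D case, pixel size is 1, and the principal point sits at the image center):

```python
def cam_par_from_kappa(kappa, width, height):
    # CamPar = [Focus, Kappa, Sx, Sy, Cx, Cy, ImageWidth, ImageHeight]
    # Focus = 0.0 (pure 2D case), unit pixel size, principal point at center.
    return [0.0, kappa, 1.0, 1.0,
            0.5 * (width - 1), 0.5 * (height - 1), width, height]
```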
The approximations HomMat2DGuide and KappaGuide can, for example, be calculated with proj_match_points_distortion_ransac on lower resolution versions of Image1 and Image2. See the example below.
The matching process is based on characteristic points, which can be extracted with point operators like points_foerstner or points_harris. The matching itself is carried out in two steps: first, gray value correlations of mask windows around the input points in the first and the second image are determined and an initial matching between them is generated using the similarity of the windows in both images. Then, the RANSAC algorithm is applied to find the projective transformation matrix and radial distortion coefficient that maximizes the number of correspondences under the above constraint.
The size of the mask windows used for the matching is MaskSize x MaskSize. Three metrics for the correlation can be selected. If GrayMatchMethod has the value 'ssd', the sum of the squared gray value differences is used, 'sad' means the sum of absolute differences, and 'ncc' is the normalized cross correlation. For details, please refer to binocular_disparity. The metric is minimized ('ssd', 'sad') or maximized ('ncc') over all possible point pairs. A match found in this way is only accepted if the value of the metric is below MatchThreshold ('ssd', 'sad') or above it ('ncc').
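Schematic NumPy versions of the three metrics, evaluated on two equally sized gray value windows, look as follows (HALCON's exact normalization may differ; this is an illustration of the definitions, not the internal implementation):

```python
import numpy as np

def ssd(a, b):
    """Sum of squared gray value differences (minimized)."""
    return float(np.sum((a - b) ** 2))

def sad(a, b):
    """Sum of absolute gray value differences (minimized)."""
    return float(np.sum(np.abs(a - b)))

def ncc(a, b):
    """Normalized cross correlation (maximized); undefined for
    constant windows, whose centered norm is zero."""
    a0 = a - a.mean()
    b0 = b - b.mean()
    return float(np.sum(a0 * b0) / (np.linalg.norm(a0) * np.linalg.norm(b0)))
```

Identical windows give 0 for 'ssd'/'sad' and 1 for 'ncc', which is why the acceptance test compares against MatchThreshold from opposite sides.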
To increase the algorithm's performance, the search area for the match candidates is limited based on the approximate transformation specified by HomMat2DGuide and KappaGuide. Only points within a distance of DistanceTolerance around the point in Image2 that is obtained when transforming a point in Image1 via HomMat2DGuide and KappaGuide are considered for the matching.
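The effect of DistanceTolerance can be sketched as a candidate filter: predict where a point from Image1 lands in Image2 under the guide transformation, then keep only the points of Image2 within the tolerance radius. For brevity the sketch uses a plain homography as the guide; HALCON additionally applies KappaGuide.

```python
import numpy as np

def guided_candidates(p1, points2, hom_guide, tol):
    """Indices of points2 (N x 2 array of (row, col)) lying within `tol`
    of the guide-transformed location of p1. Simplified sketch: the guide
    here is only a homography, without the KappaGuide distortion term."""
    x, y, w = hom_guide @ np.array([p1[0], p1[1], 1.0])
    predicted = np.array([x / w, y / w])
    d = np.linalg.norm(points2 - predicted, axis=1)
    return np.nonzero(d <= tol)[0]
```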
After the initial matching has been completed, a randomized search algorithm (RANSAC) is used to determine the projective transformation matrix HomMat2D and the radial distortion coefficient Kappa. It tries to find the parameters that are consistent with a maximum number of correspondences. For a point to be accepted, the distance to its corresponding transformed point must not exceed the threshold DistanceThreshold. Consequently, DistanceThreshold should be smaller than DistanceTolerance.
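The consistency check at the heart of the RANSAC scoring amounts to counting the correspondences whose transformed points fall within DistanceThreshold of their partners. A simplified sketch (homography only, without the Kappa term):

```python
import numpy as np

def count_inliers(hom, pts1, pts2, dist_threshold):
    """Count correspondences (pts1[i] <-> pts2[i]) whose hom-transformed
    points lie within dist_threshold of their partners. pts1 and pts2 are
    N x 2 arrays of (row, col) coordinates."""
    ones = np.ones((len(pts1), 1))
    proj = np.hstack([pts1, ones]) @ hom.T
    proj = proj[:, :2] / proj[:, 2:3]
    d = np.linalg.norm(proj - pts2, axis=1)
    return int(np.sum(d <= dist_threshold))
```

RANSAC repeatedly hypothesizes a transformation from a minimal random sample and keeps the hypothesis with the highest inlier count under this test.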
The parameter EstimationMethod determines which algorithm is used to compute the projective transformation matrix. A linear algorithm is used if EstimationMethod is set to 'linear'. This algorithm is very fast and returns accurate results for small to moderate noise of the point coordinates and for most distortions (except for very small distortions). For EstimationMethod = 'gold_standard', a mathematically optimal but slower optimization is used, which minimizes the geometric reprojection error. In general, it is preferable to use EstimationMethod = 'gold_standard'.
The value Error indicates the overall quality of the estimation procedure and is the mean symmetric Euclidean distance in pixels between the points and their corresponding transformed points.
Point pairs consistent with the above constraints are considered to be corresponding points. Points1 contains the indices of the matched input points from the first image and Points2 contains the indices of the corresponding points in the second image.
The parameter RandSeed can be used to control the randomized nature of the RANSAC algorithm, and hence to obtain reproducible results. If RandSeed is set to a positive number, the operator returns the same result on every call with the same parameters because the internally used random number generator is initialized with RandSeed. If RandSeed = 0, the random number generator is initialized with the current time. In this case the results may not be reproducible.
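The documented RandSeed semantics can be mirrored in a few lines (illustrative Python, using NumPy's generator in place of HALCON's internal one):

```python
import time
import numpy as np

def make_rng(rand_seed):
    """A positive seed yields the same draws on every call, so results are
    reproducible; a seed of 0 falls back to the current time, so results
    will generally differ between runs."""
    if rand_seed > 0:
        return np.random.default_rng(rand_seed)
    return np.random.default_rng(time.time_ns())
```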
Input image 1.
Input image 2.
Input points in image 1 (row coordinate).
Restriction: length(Rows1) >= 5
Input points in image 1 (column coordinate).
Restriction: length(Cols1) == length(Rows1)
Input points in image 2 (row coordinate).
Restriction: length(Rows2) >= 5
Input points in image 2 (column coordinate).
Restriction: length(Cols2) == length(Rows2)
Gray value match metric.
Default value: 'ncc'
List of values: 'ncc', 'sad', 'ssd'
Size of gray value masks.
Default value: 10
Typical range of values: 3 ≤ MaskSize ≤ 15
Restriction: MaskSize >= 1
Approximation of the homogeneous projective transformation matrix between the two images.
Approximation of the radial distortion coefficient in the two images.
Tolerance for the matching search window.
Default value: 20.0
Suggested values: 0.2, 0.5, 1.0, 2.0, 3.0, 5.0, 10.0, 20.0, 50.0
Restriction: DistanceTolerance > 0
Threshold for gray value matching.
Default value: 0.7
Suggested values: 0.9, 0.7, 0.5 (for 'ncc'); 10, 20, 50, 100 (for 'ssd' and 'sad')
Algorithm for the computation of the projective transformation matrix.
Default value: 'gold_standard'
List of values: 'gold_standard', 'linear'
Threshold for transformation consistency check.
Default value: 1
Restriction: DistanceThreshold > 0
Seed for the random number generator.
Default value: 0
Computed homogeneous projective transformation matrix.
Computed radial distortion coefficient.
Root-Mean-Square transformation error.
Indices of matched input points in image 1.
Indices of matched input points in image 2.
Factor := 0.5
zoom_image_factor (Image1, Image1Zoomed, Factor, Factor, 'constant')
zoom_image_factor (Image2, Image2Zoomed, Factor, Factor, 'constant')
points_foerstner (Image1Zoomed, 1, 2, 3, 200, 0.3, 'gauss', 'true', \
                  Rows1, Cols1, _, _, _, _, _, _, _, _)
points_foerstner (Image2Zoomed, 1, 2, 3, 200, 0.3, 'gauss', 'true', \
                  Rows2, Cols2, _, _, _, _, _, _, _, _)
get_image_size (Image1Zoomed, Width, Height)
proj_match_points_distortion_ransac (Image1Zoomed, Image2Zoomed, \
                                     Rows1, Cols1, Rows2, Cols2, \
                                     'ncc', 10, 0, 0, Height, Width, \
                                     0, 0.5, 'gold_standard', 2, 0, \
                                     HomMat2D, Kappa, Error, \
                                     Points1, Points2)
hom_mat2d_scale_local (HomMat2D, Factor, Factor, HomMat2DGuide)
hom_mat2d_scale (HomMat2DGuide, 1.0/Factor, 1.0/Factor, 0, 0, \
                 HomMat2DGuide)
KappaGuide := Kappa*Factor*Factor
points_foerstner (Image1, 1, 2, 3, 200, 0.3, 'gauss', 'true', \
                  Rows1, Cols1, _, _, _, _, _, _, _, _)
points_foerstner (Image2, 1, 2, 3, 200, 0.3, 'gauss', 'true', \
                  Rows2, Cols2, _, _, _, _, _, _, _, _)
proj_match_points_distortion_ransac_guided (Image1, Image2, \
                                            Rows1, Cols1, \
                                            Rows2, Cols2, \
                                            'ncc', 10, \
                                            HomMat2DGuide, \
                                            KappaGuide, 5, 0.5, \
                                            'gold_standard', 2, 0, \
                                            HomMat2D, Kappa, \
                                            Error, Points1, Points2)
get_image_size (Image1, Width, Height)
CamParDist := [0.0,Kappa,1.0,1.0,0.5*(Width-1),0.5*(Height-1), \
               Width,Height]
change_radial_distortion_cam_par ('fixed', CamParDist, 0, CamPar)
change_radial_distortion_image (Image1, Image1, Image1Rect, \
                                CamParDist, CamPar)
change_radial_distortion_image (Image2, Image2, Image2Rect, \
                                CamParDist, CamPar)
concat_obj (Image1Rect, Image2Rect, ImagesRect)
gen_projective_mosaic (ImagesRect, MosaicImage, 1, 1, 2, HomMat2D, \
                       'default', 'false', MosaicMatrices2D)
vector_to_proj_hom_mat2d_distortion, change_radial_distortion_cam_par, change_radial_distortion_image, change_radial_distortion_points, gen_binocular_proj_rectification, projective_trans_image, projective_trans_image_size, projective_trans_region, projective_trans_contour_xld, projective_trans_point_2d, projective_trans_pixel
proj_match_points_ransac, proj_match_points_ransac_guided, hom_vector_to_proj_hom_mat2d, vector_to_proj_hom_mat2d
Richard Hartley, Andrew Zisserman: “Multiple View Geometry in Computer Vision”; Cambridge University Press, Cambridge; 2003.
Olivier Faugeras, Quang-Tuan Luong: “The Geometry of Multiple Images: The Laws That Govern the Formation of Multiple Images of a Scene and Some of Their Applications”; MIT Press, Cambridge, MA; 2001.