vector_to_proj_hom_mat2d (Operator)

Name

vector_to_proj_hom_mat2d — Compute a projective transformation matrix using given point correspondences.

Signature

vector_to_proj_hom_mat2d( : : Px, Py, Qx, Qy, Method, CovXX1, CovYY1, CovXY1, CovXX2, CovYY2, CovXY2 : HomMat2D, Covariance)

Herror T_vector_to_proj_hom_mat2d(const Htuple Px, const Htuple Py, const Htuple Qx, const Htuple Qy, const Htuple Method, const Htuple CovXX1, const Htuple CovYY1, const Htuple CovXY1, const Htuple CovXX2, const Htuple CovYY2, const Htuple CovXY2, Htuple* HomMat2D, Htuple* Covariance)

void VectorToProjHomMat2d(const HTuple& Px, const HTuple& Py, const HTuple& Qx, const HTuple& Qy, const HTuple& Method, const HTuple& CovXX1, const HTuple& CovYY1, const HTuple& CovXY1, const HTuple& CovXX2, const HTuple& CovYY2, const HTuple& CovXY2, HTuple* HomMat2D, HTuple* Covariance)

HTuple HHomMat2D::VectorToProjHomMat2d(const HTuple& Px, const HTuple& Py, const HTuple& Qx, const HTuple& Qy, const HString& Method, const HTuple& CovXX1, const HTuple& CovYY1, const HTuple& CovXY1, const HTuple& CovXX2, const HTuple& CovYY2, const HTuple& CovXY2)

HTuple HHomMat2D::VectorToProjHomMat2d(const HTuple& Px, const HTuple& Py, const HTuple& Qx, const HTuple& Qy, const char* Method, const HTuple& CovXX1, const HTuple& CovYY1, const HTuple& CovXY1, const HTuple& CovXX2, const HTuple& CovYY2, const HTuple& CovXY2)

HTuple HHomMat2D::VectorToProjHomMat2d(const HTuple& Px, const HTuple& Py, const HTuple& Qx, const HTuple& Qy, const wchar_t* Method, const HTuple& CovXX1, const HTuple& CovYY1, const HTuple& CovXY1, const HTuple& CovXX2, const HTuple& CovYY2, const HTuple& CovXY2)   (Windows only)

static void HOperatorSet.VectorToProjHomMat2d(HTuple px, HTuple py, HTuple qx, HTuple qy, HTuple method, HTuple covXX1, HTuple covYY1, HTuple covXY1, HTuple covXX2, HTuple covYY2, HTuple covXY2, out HTuple homMat2D, out HTuple covariance)

HTuple HHomMat2D.VectorToProjHomMat2d(HTuple px, HTuple py, HTuple qx, HTuple qy, string method, HTuple covXX1, HTuple covYY1, HTuple covXY1, HTuple covXX2, HTuple covYY2, HTuple covXY2)

def vector_to_proj_hom_mat2d(px: Sequence[Union[float, int]], py: Sequence[Union[float, int]], qx: Sequence[float], qy: Sequence[float], method: str, cov_xx1: Sequence[float], cov_yy1: Sequence[float], cov_xy1: Sequence[float], cov_xx2: Sequence[float], cov_yy2: Sequence[float], cov_xy2: Sequence[float]) -> Tuple[Sequence[float], Sequence[float]]

Description

vector_to_proj_hom_mat2d determines the homogeneous projective transformation matrix HomMat2D that optimally fulfills the mapping equations (Qx,Qy,1)^T ≃ HomMat2D · (Px,Py,1)^T (equality up to scale) given by at least 4 point correspondences (Px,Py), (Qx,Qy). If fewer than 4 point pairs are given, no unique solution exists; if exactly 4 pairs are supplied, HomMat2D transforms them in exactly the desired way; and if more than 4 point pairs are given, vector_to_proj_hom_mat2d minimizes the transformation error. Several different algorithms are available for this minimization; the algorithm is selected with the parameter Method. Method='dlt' uses a fast and simple, but rather inaccurate, estimation algorithm, Method='normalized_dlt' offers a good compromise between speed and accuracy, and Method='gold_standard' performs a mathematically optimal but slower optimization.
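The following lines sketch a basic call through the Python interface shown above. This is a minimal sketch, assuming the HALCON/Python bindings are installed and importable as halcon; the point coordinates are made up for illustration.

import halcon as ha  # assumption: HALCON/Python interface available as 'halcon'

# Four point correspondences; row coordinates are passed in Px/Qx and
# column coordinates in Py/Qy (see the note on coordinate order below).
px = [100.0, 100.0, 400.0, 400.0]
py = [50.0, 300.0, 300.0, 50.0]
qx = [110.0, 120.0, 420.0, 390.0]
qy = [60.0, 310.0, 320.0, 70.0]

# 'normalized_dlt' offers a good compromise between speed and accuracy;
# the covariance parameters may be left empty when they are unknown.
hom_mat_2d, covariance = ha.vector_to_proj_hom_mat2d(
    px, py, qx, qy, 'normalized_dlt', [], [], [], [], [], [])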

If 'gold_standard' is used and the input points have been obtained from an operator such as points_foerstner, which provides a covariance matrix specifying the accuracy of each point, this accuracy can be taken into account through the input parameters CovYY1, CovXX1, CovXY1 for the points in the first image and CovYY2, CovXX2, CovXY2 for the points in the second image. The covariances are symmetric 2×2 matrices: CovXX1/CovXX2 and CovYY1/CovYY2 contain the diagonal entries, while CovXY1/CovXY2 contains the off-diagonal entry, which appears twice in a symmetric matrix. If a Method other than 'gold_standard' is used or the covariances are unknown, the covariance parameters can be left empty.
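Continuing the sketch above, per-point covariances could be passed with 'gold_standard' as follows; the variance values are purely illustrative, and in practice they would come from an operator such as points_foerstner.

# Entries of the symmetric 2x2 covariance matrix of each point, one list
# element per point (illustrative values, not measured data).
cov_xx1 = [0.02, 0.02, 0.03, 0.02]  # row-coordinate variances, image 1
cov_yy1 = [0.02, 0.03, 0.02, 0.02]  # column-coordinate variances, image 1
cov_xy1 = [0.0, 0.0, 0.0, 0.0]      # off-diagonal covariances, image 1
cov_xx2 = [0.02, 0.02, 0.02, 0.03]  # row-coordinate variances, image 2
cov_yy2 = [0.03, 0.02, 0.02, 0.02]  # column-coordinate variances, image 2
cov_xy2 = [0.0, 0.0, 0.0, 0.0]      # off-diagonal covariances, image 2

hom_mat_2d, covariance = ha.vector_to_proj_hom_mat2d(
    px, py, qx, qy, 'gold_standard',
    cov_xx1, cov_yy1, cov_xy1, cov_xx2, cov_yy2, cov_xy2)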

In contrast to hom_vector_to_proj_hom_mat2d, points at infinity cannot be used to determine the transformation in vector_to_proj_hom_mat2d. If this is necessary, hom_vector_to_proj_hom_mat2d must be used. If the correspondence between the points has not been determined, proj_match_points_ransac should be used to determine the correspondence as well as the transformation.

If the points to transform are specified in standard image coordinates, their row coordinates must be passed in Px and their column coordinates in Py. This is necessary to obtain a right-handed coordinate system for the image. In particular, this assures that rotations are performed in the correct direction. Note that the (x,y) order of the matrices quite naturally corresponds to the usual (row,column) order for coordinates in the image.

Attention

It should be noted that homogeneous transformation matrices refer to a general right-handed mathematical coordinate system. If a homogeneous transformation matrix is used to transform images, regions, XLD contours, or any other data that has been extracted from images, the row coordinates of the transformation must be passed in the x coordinates, while the column coordinates must be passed in the y coordinates. Consequently, the order of passing row and column coordinates follows the usual order (Row,Column). This convention is essential to obtain a right-handed coordinate system for the transformation of iconic data, and consequently to ensure in particular that rotations are performed in the correct mathematical direction.

Furthermore, it should be noted that if a homogeneous transformation matrix is used to transform images, regions, XLD contours, or any other data that has been extracted from images, it is assumed that the origin of the coordinate system of the homogeneous transformation matrix lies in the upper left corner of a pixel. The image processing operators that return point coordinates, however, assume a coordinate system in which the origin lies in the center of a pixel. Therefore, to obtain a consistent homogeneous transformation matrix, 0.5 must be added to the point coordinates before computing the transformation.
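As a small sketch of these two conventions, reusing the illustrative point lists from above: row coordinates are passed as x, column coordinates as y, and 0.5 is added to the coordinates returned by the point extractor before the matrix is computed.

# Shift point coordinates from the pixel-center convention of the point
# extractors to the pixel-corner convention assumed when transforming
# iconic data (illustrative; reuses px, py, qx, qy from the sketch above).
px_c = [x + 0.5 for x in px]
py_c = [y + 0.5 for y in py]
qx_c = [x + 0.5 for x in qx]
qy_c = [y + 0.5 for y in qy]

hom_mat_2d, _ = ha.vector_to_proj_hom_mat2d(
    px_c, py_c, qx_c, qy_c, 'normalized_dlt', [], [], [], [], [], [])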

Execution Information

Parameters

Px (input_control)  point.x-array HTuple (real / integer)

Input points in image 1 (row coordinate).

Py (input_control)  point.y-array HTuple (real / integer)

Input points in image 1 (column coordinate).

Qx (input_control)  point.x-array HTuple (real)

Input points in image 2 (row coordinate).

Qy (input_control)  point.y-array HTuple (real)

Input points in image 2 (column coordinate).

Method (input_control)  string HTuple (string)

Estimation algorithm.

Default value: 'normalized_dlt'

List of values: 'dlt', 'gold_standard', 'normalized_dlt'

CovXX1 (input_control)  real-array HTuple (real)

Row coordinate variance of the points in image 1.

Default value: []

CovYY1 (input_control)  real-array HTuple (real)

Column coordinate variance of the points in image 1.

Default value: []

CovXY1 (input_control)  real-array HTuple (real)

Covariance of the points in image 1.

Default value: []

CovXX2 (input_control)  real-array HTuple (real)

Row coordinate variance of the points in image 2.

Default value: []

CovYY2 (input_control)  real-array HTuple (real)

Column coordinate variance of the points in image 2.

Default value: []

CovXY2 (input_control)  real-array HTuple (real)

Covariance of the points in image 2.

Default value: []

HomMat2D (output_control)  hom_mat2d HTuple (real)

Homogeneous projective transformation matrix.

Covariance (output_control)  real-array HTuple (real)

9×9 covariance matrix of the projective transformation matrix.

Possible Predecessors

proj_match_points_ransac, proj_match_points_ransac_guided, points_foerstner, points_harris

Possible Successors

projective_trans_image, projective_trans_image_size, projective_trans_region, projective_trans_contour_xld, projective_trans_point_2d, projective_trans_pixel
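As a hedged sketch of a typical follow-up step, reusing hom_mat_2d and the illustrative point lists from the sketches above, the estimated matrix can be applied to pixel coordinates with projective_trans_pixel:

# Apply the estimated projective transformation to the pixel coordinates of
# the input points; projective_trans_pixel expects and returns (Row, Col)
# coordinates (continues the illustrative sketch above).
row_trans, col_trans = ha.projective_trans_pixel(hom_mat_2d, px, py)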

Alternatives

hom_vector_to_proj_hom_mat2d, proj_match_points_ransac, proj_match_points_ransac_guided

References

Richard Hartley, Andrew Zisserman: “Multiple View Geometry in Computer Vision”; Cambridge University Press, Cambridge; 2000.
Olivier Faugeras, Quang-Tuan Luong: “The Geometry of Multiple Images: The Laws That Govern the Formation of Multiple Images of a Scene and Some of Their Applications”; MIT Press, Cambridge, MA; 2001.

Module

Calibration