
match_rel_pose_ransac (Operator)

Name

match_rel_pose_ransac — Compute the relative orientation between two cameras by automatically finding correspondences between image points.

Signature

match_rel_pose_ransac(Image1, Image2 : : Rows1, Cols1, Rows2, Cols2, CamPar1, CamPar2, GrayMatchMethod, MaskSize, RowMove, ColMove, RowTolerance, ColTolerance, Rotation, MatchThreshold, EstimationMethod, DistanceThreshold, RandSeed : RelPose, CovRelPose, Error, Points1, Points2)

Herror T_match_rel_pose_ransac(const Hobject Image1, const Hobject Image2, const Htuple Rows1, const Htuple Cols1, const Htuple Rows2, const Htuple Cols2, const Htuple CamPar1, const Htuple CamPar2, const Htuple GrayMatchMethod, const Htuple MaskSize, const Htuple RowMove, const Htuple ColMove, const Htuple RowTolerance, const Htuple ColTolerance, const Htuple Rotation, const Htuple MatchThreshold, const Htuple EstimationMethod, const Htuple DistanceThreshold, const Htuple RandSeed, Htuple* RelPose, Htuple* CovRelPose, Htuple* Error, Htuple* Points1, Htuple* Points2)

Herror match_rel_pose_ransac(Hobject Image1, Hobject Image2, const HTuple& Rows1, const HTuple& Cols1, const HTuple& Rows2, const HTuple& Cols2, const HTuple& CamPar1, const HTuple& CamPar2, const HTuple& GrayMatchMethod, const HTuple& MaskSize, const HTuple& RowMove, const HTuple& ColMove, const HTuple& RowTolerance, const HTuple& ColTolerance, const HTuple& Rotation, const HTuple& MatchThreshold, const HTuple& EstimationMethod, const HTuple& DistanceThreshold, const HTuple& RandSeed, HTuple* RelPose, HTuple* CovRelPose, HTuple* Error, HTuple* Points1, HTuple* Points2)

HTuple HImage::MatchRelPoseRansac(const HImage& Image2, const HTuple& Rows1, const HTuple& Cols1, const HTuple& Rows2, const HTuple& Cols2, const HTuple& CamPar1, const HTuple& CamPar2, const HTuple& GrayMatchMethod, const HTuple& MaskSize, const HTuple& RowMove, const HTuple& ColMove, const HTuple& RowTolerance, const HTuple& ColTolerance, const HTuple& Rotation, const HTuple& MatchThreshold, const HTuple& EstimationMethod, const HTuple& DistanceThreshold, const HTuple& RandSeed, HTuple* CovRelPose, HTuple* Error, HTuple* Points1, HTuple* Points2) const

void MatchRelPoseRansac(const HObject& Image1, const HObject& Image2, const HTuple& Rows1, const HTuple& Cols1, const HTuple& Rows2, const HTuple& Cols2, const HTuple& CamPar1, const HTuple& CamPar2, const HTuple& GrayMatchMethod, const HTuple& MaskSize, const HTuple& RowMove, const HTuple& ColMove, const HTuple& RowTolerance, const HTuple& ColTolerance, const HTuple& Rotation, const HTuple& MatchThreshold, const HTuple& EstimationMethod, const HTuple& DistanceThreshold, const HTuple& RandSeed, HTuple* RelPose, HTuple* CovRelPose, HTuple* Error, HTuple* Points1, HTuple* Points2)

HPose HImage::MatchRelPoseRansac(const HImage& Image2, const HTuple& Rows1, const HTuple& Cols1, const HTuple& Rows2, const HTuple& Cols2, const HTuple& CamPar1, const HTuple& CamPar2, const HString& GrayMatchMethod, Hlong MaskSize, Hlong RowMove, Hlong ColMove, Hlong RowTolerance, Hlong ColTolerance, const HTuple& Rotation, const HTuple& MatchThreshold, const HString& EstimationMethod, const HTuple& DistanceThreshold, Hlong RandSeed, HTuple* CovRelPose, HTuple* Error, HTuple* Points1, HTuple* Points2) const

HPose HImage::MatchRelPoseRansac(const HImage& Image2, const HTuple& Rows1, const HTuple& Cols1, const HTuple& Rows2, const HTuple& Cols2, const HTuple& CamPar1, const HTuple& CamPar2, const HString& GrayMatchMethod, Hlong MaskSize, Hlong RowMove, Hlong ColMove, Hlong RowTolerance, Hlong ColTolerance, double Rotation, Hlong MatchThreshold, const HString& EstimationMethod, double DistanceThreshold, Hlong RandSeed, HTuple* CovRelPose, double* Error, HTuple* Points1, HTuple* Points2) const

HPose HImage::MatchRelPoseRansac(const HImage& Image2, const HTuple& Rows1, const HTuple& Cols1, const HTuple& Rows2, const HTuple& Cols2, const HTuple& CamPar1, const HTuple& CamPar2, const char* GrayMatchMethod, Hlong MaskSize, Hlong RowMove, Hlong ColMove, Hlong RowTolerance, Hlong ColTolerance, double Rotation, Hlong MatchThreshold, const char* EstimationMethod, double DistanceThreshold, Hlong RandSeed, HTuple* CovRelPose, double* Error, HTuple* Points1, HTuple* Points2) const

HTuple HPose::MatchRelPoseRansac(const HImage& Image1, const HImage& Image2, const HTuple& Rows1, const HTuple& Cols1, const HTuple& Rows2, const HTuple& Cols2, const HTuple& CamPar1, const HTuple& CamPar2, const HString& GrayMatchMethod, Hlong MaskSize, Hlong RowMove, Hlong ColMove, Hlong RowTolerance, Hlong ColTolerance, const HTuple& Rotation, const HTuple& MatchThreshold, const HString& EstimationMethod, const HTuple& DistanceThreshold, Hlong RandSeed, HTuple* Error, HTuple* Points1, HTuple* Points2)

HTuple HPose::MatchRelPoseRansac(const HImage& Image1, const HImage& Image2, const HTuple& Rows1, const HTuple& Cols1, const HTuple& Rows2, const HTuple& Cols2, const HTuple& CamPar1, const HTuple& CamPar2, const HString& GrayMatchMethod, Hlong MaskSize, Hlong RowMove, Hlong ColMove, Hlong RowTolerance, Hlong ColTolerance, double Rotation, Hlong MatchThreshold, const HString& EstimationMethod, double DistanceThreshold, Hlong RandSeed, double* Error, HTuple* Points1, HTuple* Points2)

HTuple HPose::MatchRelPoseRansac(const HImage& Image1, const HImage& Image2, const HTuple& Rows1, const HTuple& Cols1, const HTuple& Rows2, const HTuple& Cols2, const HTuple& CamPar1, const HTuple& CamPar2, const char* GrayMatchMethod, Hlong MaskSize, Hlong RowMove, Hlong ColMove, Hlong RowTolerance, Hlong ColTolerance, double Rotation, Hlong MatchThreshold, const char* EstimationMethod, double DistanceThreshold, Hlong RandSeed, double* Error, HTuple* Points1, HTuple* Points2)

void HOperatorSetX.MatchRelPoseRansac(
[in] IHUntypedObjectX* Image1, [in] IHUntypedObjectX* Image2, [in] VARIANT Rows1, [in] VARIANT Cols1, [in] VARIANT Rows2, [in] VARIANT Cols2, [in] VARIANT CamPar1, [in] VARIANT CamPar2, [in] VARIANT GrayMatchMethod, [in] VARIANT MaskSize, [in] VARIANT RowMove, [in] VARIANT ColMove, [in] VARIANT RowTolerance, [in] VARIANT ColTolerance, [in] VARIANT Rotation, [in] VARIANT MatchThreshold, [in] VARIANT EstimationMethod, [in] VARIANT DistanceThreshold, [in] VARIANT RandSeed, [out] VARIANT* RelPose, [out] VARIANT* CovRelPose, [out] VARIANT* Error, [out] VARIANT* Points1, [out] VARIANT* Points2)

VARIANT HImageX.MatchRelPoseRansac(
[in] IHImageX* Image2, [in] VARIANT Rows1, [in] VARIANT Cols1, [in] VARIANT Rows2, [in] VARIANT Cols2, [in] VARIANT CamPar1, [in] VARIANT CamPar2, [in] BSTR GrayMatchMethod, [in] Hlong MaskSize, [in] Hlong RowMove, [in] Hlong ColMove, [in] Hlong RowTolerance, [in] Hlong ColTolerance, [in] VARIANT Rotation, [in] VARIANT MatchThreshold, [in] BSTR EstimationMethod, [in] VARIANT DistanceThreshold, [in] Hlong RandSeed, [out] VARIANT* CovRelPose, [out] VARIANT* Error, [out] VARIANT* Points1, [out] VARIANT* Points2)

VARIANT HPoseX.MatchRelPoseRansac(
[in] IHImageX* Image1, [in] IHImageX* Image2, [in] VARIANT Rows1, [in] VARIANT Cols1, [in] VARIANT Rows2, [in] VARIANT Cols2, [in] VARIANT CamPar1, [in] VARIANT CamPar2, [in] BSTR GrayMatchMethod, [in] Hlong MaskSize, [in] Hlong RowMove, [in] Hlong ColMove, [in] Hlong RowTolerance, [in] Hlong ColTolerance, [in] VARIANT Rotation, [in] VARIANT MatchThreshold, [in] BSTR EstimationMethod, [in] VARIANT DistanceThreshold, [in] Hlong RandSeed, [out] VARIANT* CovRelPose, [out] VARIANT* Error, [out] VARIANT* Points1, [out] VARIANT* Points2)

static void HOperatorSet.MatchRelPoseRansac(HObject image1, HObject image2, HTuple rows1, HTuple cols1, HTuple rows2, HTuple cols2, HTuple camPar1, HTuple camPar2, HTuple grayMatchMethod, HTuple maskSize, HTuple rowMove, HTuple colMove, HTuple rowTolerance, HTuple colTolerance, HTuple rotation, HTuple matchThreshold, HTuple estimationMethod, HTuple distanceThreshold, HTuple randSeed, out HTuple relPose, out HTuple covRelPose, out HTuple error, out HTuple points1, out HTuple points2)

HPose HImage.MatchRelPoseRansac(HImage image2, HTuple rows1, HTuple cols1, HTuple rows2, HTuple cols2, HTuple camPar1, HTuple camPar2, string grayMatchMethod, int maskSize, int rowMove, int colMove, int rowTolerance, int colTolerance, HTuple rotation, HTuple matchThreshold, string estimationMethod, HTuple distanceThreshold, int randSeed, out HTuple covRelPose, out HTuple error, out HTuple points1, out HTuple points2)

HPose HImage.MatchRelPoseRansac(HImage image2, HTuple rows1, HTuple cols1, HTuple rows2, HTuple cols2, HTuple camPar1, HTuple camPar2, string grayMatchMethod, int maskSize, int rowMove, int colMove, int rowTolerance, int colTolerance, double rotation, int matchThreshold, string estimationMethod, double distanceThreshold, int randSeed, out HTuple covRelPose, out double error, out HTuple points1, out HTuple points2)

HTuple HPose.MatchRelPoseRansac(HImage image1, HImage image2, HTuple rows1, HTuple cols1, HTuple rows2, HTuple cols2, HTuple camPar1, HTuple camPar2, string grayMatchMethod, int maskSize, int rowMove, int colMove, int rowTolerance, int colTolerance, HTuple rotation, HTuple matchThreshold, string estimationMethod, HTuple distanceThreshold, int randSeed, out HTuple error, out HTuple points1, out HTuple points2)

HTuple HPose.MatchRelPoseRansac(HImage image1, HImage image2, HTuple rows1, HTuple cols1, HTuple rows2, HTuple cols2, HTuple camPar1, HTuple camPar2, string grayMatchMethod, int maskSize, int rowMove, int colMove, int rowTolerance, int colTolerance, double rotation, int matchThreshold, string estimationMethod, double distanceThreshold, int randSeed, out double error, out HTuple points1, out HTuple points2)

Description

Given a set of coordinates of characteristic points (Rows1, Cols1) and (Rows2, Cols2) in the stereo images Image1 and Image2, along with known internal camera parameters CamPar1 and CamPar2, match_rel_pose_ransac automatically determines the geometry of the stereo setup and finds the correspondences between the characteristic points. The geometry of the stereo setup is represented by the relative pose RelPose, and all corresponding points have to fulfill the epipolar constraint. RelPose indicates the relative pose of camera 1 with respect to camera 2 (see create_pose for more information about poses and their representations). This is in accordance with the explicit calibration of a stereo setup using the operator calibrate_cameras. Now, let R, t be the rotation and translation of the relative pose. Then, the essential matrix E is defined as E = ([t]_x R)^T, where [t]_x denotes the 3x3 skew-symmetric matrix realizing the cross product with the vector t. The pose can be determined from the epipolar constraint:

\[
\begin{pmatrix} X_2 & Y_2 & 1 \end{pmatrix} \, ([t]_x R)^T \, \begin{pmatrix} X_1 \\ Y_1 \\ 1 \end{pmatrix} = 0 ,
\qquad
[t]_x = \begin{pmatrix} 0 & -t_z & t_y \\ t_z & 0 & -t_x \\ -t_y & t_x & 0 \end{pmatrix} .
\]

Note that the essential matrix is a projective entity and thus is defined only up to a scaling factor. It follows that the translation vector of the relative pose can also be determined only up to scale. In fact, the computed translation vector is always normalized to unit length. As a consequence, a subsequent three-dimensional reconstruction of the scene, using for instance vector_to_rel_pose, can be carried out only up to a single global scaling factor.
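
As an illustration of this definition, the essential matrix can be assembled from a relative pose with the standard pose and matrix algebra operators. The following HDevelop lines are a minimal sketch (not part of the operator itself) and assume that RelPose contains a single pose solution:

* Convert the pose into the homogeneous 3x4 matrix [R | t]
* (a 12-element tuple in row-major order).
pose_to_hom_mat3d (RelPose, HomMat3D)
R := [HomMat3D[0],HomMat3D[1],HomMat3D[2],HomMat3D[4],HomMat3D[5],HomMat3D[6],HomMat3D[8],HomMat3D[9],HomMat3D[10]]
t := [HomMat3D[3],HomMat3D[7],HomMat3D[11]]
* Skew-symmetric matrix [t]_x realizing the cross product with t.
Tx := [0,-t[2],t[1],t[2],0,-t[0],-t[1],t[0],0]
create_matrix (3, 3, Tx, MatTx)
create_matrix (3, 3, R, MatR)
* E = ([t]_x R)^T
mult_matrix (MatTx, MatR, 'AB', MatTxR)
transpose_matrix (MatTxR, MatE)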

The operator match_rel_pose_ransac is designed to deal with a camera model that includes lens distortions. This is in contrast to the operator match_essential_matrix_ransac, which encompasses only straight-line-preserving cameras. The camera parameters are passed in CamPar1 and CamPar2. The 3D direction vectors (X1,Y1,1) and (X2,Y2,1) are calculated from the point coordinates (Rows1,Cols1) and (Rows2,Cols2) by inverting the process of projection (see calibrate_cameras).
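
For orientation, the internal parameters of an area scan camera with the division distortion model form an 8-element tuple as sketched below; the concrete numbers are purely illustrative placeholders, not values prescribed by this manual:

* [Focus, Kappa, Sx, Sy, Cx, Cy, ImageWidth, ImageHeight]
CamPar1 := [0.0087, -1150.0, 8.3e-6, 8.3e-6, 320.5, 240.5, 640, 480]
CamPar2 := CamPar1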

The matching process is based on characteristic points, which can be extracted with point operators like points_foerstner or points_harris. The matching itself is carried out in two steps: first, gray value correlations of mask windows around the input points in the first and the second image are determined, and an initial matching between them is generated using the similarity of the windows in both images. Then, the RANSAC algorithm is applied to find the relative pose that maximizes the number of correspondences under the epipolar constraint.

The size of the mask windows is MaskSize x MaskSize. Three metrics for the correlation can be selected. If GrayMatchMethod has the value 'ssd', the sum of the squared gray value differences is used, 'sad' means the sum of absolute differences, and 'ncc' is the normalized cross correlation. For details please refer to binocular_disparity. The metric is minimized ('ssd', 'sad') or maximized ('ncc') over all possible point pairs. A match found in this way is only accepted if the value of the metric is below the value of MatchThreshold ('ssd', 'sad') or above that value ('ncc').
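
As a sketch of consistent parameter choices (the numeric thresholds are illustrative, taken from the suggested values listed under MatchThreshold below):

* Difference-based metric: accept matches with a small error value.
GrayMatchMethod := 'ssd'
MatchThreshold := 20
* Correlation-based metric: accept matches with a high correlation value.
* GrayMatchMethod := 'ncc'
* MatchThreshold := 0.9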

To increase the speed of the algorithm, the search area for the matches can be limited. Only points within a window of 2*RowTolerance x 2*ColTolerance points are considered. The offset of the center of the search window in the second image with respect to the position of the current point in the first image is given by RowMove and ColMove.

If the second camera is rotated around the optical axis with respect to the first camera, the parameter Rotation may contain an estimate of the rotation angle or an angle interval in radians. A good guess will increase the quality of the gray value matching. If the actual rotation differs too much from the specified estimate, the matching will typically fail. In this case, an angle interval should be specified, and Rotation is a tuple with two elements. The larger the given interval, the slower the operator becomes, since the RANSAC algorithm is run over all angle increments within the interval.
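
For example, if the second camera may be rolled by up to roughly 10 degrees in either direction, Rotation could be passed as follows (illustrative values):

* Single estimate of the roll angle, in radians:
Rotation := rad(5)
* Alternatively, an angle interval [lower, upper]:
Rotation := [rad(-10), rad(10)]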

After the initial matching is completed, a randomized search algorithm (RANSAC) is used to determine the relative pose RelPose. It tries to find the relative pose that is consistent with a maximum number of correspondences. For a point to be accepted, the distance to its corresponding epipolar line must not exceed the threshold DistanceThreshold.

The parameter EstimationMethod decides whether the relative orientation between the cameras is of a special type and which algorithm is to be applied for its computation. If EstimationMethod is either 'normalized_dlt' or 'gold_standard', the relative orientation is arbitrary. Choosing 'trans_normalized_dlt' or 'trans_gold_standard' means that the relative motion between the cameras is a pure translation. The typical application for this special motion case is the scenario of a single fixed camera looking onto a moving conveyor belt. In order to get a unique solution of the correspondence problem, the minimum required number of corresponding points is six in the general case and three in the special, translational case.
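
For the conveyor belt scenario mentioned above, the translational, statistically optimal variant would be selected as follows (sketch):

* Camera motion is a pure translation; at least 3 point correspondences
* are required in this case (6 in the general case).
EstimationMethod := 'trans_gold_standard'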

The relative pose is computed by a linear algorithm if 'normalized_dlt' or 'trans_normalized_dlt' is chosen. With 'gold_standard' or 'trans_gold_standard', the algorithm gives a statistically optimal result and additionally returns the covariance of the relative pose in CovRelPose. Here, 'normalized_dlt' and 'gold_standard' stand for the direct linear transformation and the gold standard algorithm, respectively. Note that, in general, the found correspondences differ depending on the deployed estimation method.

The value Error indicates the overall quality of the estimation procedure and is the mean Euclidean distance in pixels between the points and their corresponding epipolar lines.

Point pairs consistent with the above constraints are considered to be in correspondence. Points1 contains the indices of the matched input points from the first image and Points2 contains the indices of the corresponding points in the second image.

For the operator match_rel_pose_ransac a special configuration of scene points and cameras exists: if all 3D points lie in a single plane and additionally are all closer to one of the two cameras, the solution for the essential matrix is not unique but twofold. As a consequence, both solutions are computed and returned by the operator. This means that the output parameters RelPose, CovRelPose, and Error are of double length, and the values of the second solution are simply concatenated behind the values of the first one.
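
Since a pose is represented by a 7-element tuple, the two concatenated solutions can be separated as in the following sketch (which assumes the degenerate configuration occurred):

* Two solutions were returned if the pose tuple has double length.
if (|RelPose| == 14)
    RelPose1 := RelPose[0:6]
    RelPose2 := RelPose[7:13]
endif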

The parameter RandSeed can be used to control the randomized nature of the RANSAC algorithm and hence to obtain reproducible results. If RandSeed is set to a positive number, the operator yields the same result on every call with the same parameters, because the internally used random number generator is initialized with RandSeed. If RandSeed = 0, the random number generator is initialized with the current time. In this case the results may not be reproducible.
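
The following end-to-end sketch illustrates a typical embedding of the operator; the file names and numeric parameter values are illustrative assumptions, and the internal camera parameters CamPar1 and CamPar2 are assumed to be known already:

* Read the stereo image pair (placeholder file names).
read_image (Image1, 'stereo_left')
read_image (Image2, 'stereo_right')
* Extract characteristic points in both images (parameter values are examples).
points_harris (Image1, 1.0, 3.0, 0.08, 1000, Rows1, Cols1)
points_harris (Image2, 1.0, 3.0, 0.08, 1000, Rows2, Cols2)
* Determine the correspondences and the relative pose; the positive
* RandSeed (42) makes the RANSAC result reproducible.
match_rel_pose_ransac (Image1, Image2, Rows1, Cols1, Rows2, Cols2, CamPar1, CamPar2, 'ncc', 10, 0, 0, 200, 200, 0, 0.7, 'gold_standard', 1, 42, RelPose, CovRelPose, Error, Points1, Points2)
* Select the coordinates of the matched points, e.g., for a subsequent
* refinement with vector_to_rel_pose.
tuple_select (Rows1, Points1, MRows1)
tuple_select (Cols1, Points1, MCols1)
tuple_select (Rows2, Points2, MRows2)
tuple_select (Cols2, Points2, MCols2)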

Parallelization

Parameters

Image1 (input_object)  singlechannelimage → object (byte / uint2)

Input image 1.

Image2 (input_object)  singlechannelimage → object (byte / uint2)

Input image 2.

Rows1 (input_control)  number-array → (real / integer)

Row coordinates of characteristic points in image 1.

Restriction: length(Rows1) >= 6 || length(Rows1) >= 3

Cols1 (input_control)  number-array → (real / integer)

Column coordinates of characteristic points in image 1.

Restriction: length(Cols1) == length(Rows1)

Rows2 (input_control)  number-array → (real / integer)

Row coordinates of characteristic points in image 2.

Restriction: length(Rows2) >= 6 || length(Rows2) >= 3

Cols2 (input_control)  number-array → (real / integer)

Column coordinates of characteristic points in image 2.

Restriction: length(Cols2) == length(Rows2)

CamPar1 (input_control)  number-array → (real / integer)

Parameters of the 1st camera.

CamPar2 (input_control)  number-array → (real / integer)

Parameters of the 2nd camera.

GrayMatchMethod (input_control)  string → (string)

Gray value comparison metric.

Default value: 'ssd'

List of values: 'ncc', 'sad', 'ssd'

MaskSize (input_control)  integer → (integer)

Size of gray value masks.

Default value: 10

Typical range of values: 3 ≤ MaskSize ≤ 15

Restriction: MaskSize >= 1

RowMove (input_control)  integer → (integer)

Average row coordinate shift of corresponding points.

Default value: 0

Typical range of values: 0 ≤ RowMove ≤ 200

ColMove (input_control)  integer → (integer)

Average column coordinate shift of corresponding points.

Default value: 0

Typical range of values: 0 ≤ ColMove ≤ 200

RowTolerance (input_control)  integer → (integer)

Half height of matching search window.

Default value: 200

Typical range of values: 50 ≤ RowTolerance ≤ 200

Restriction: RowTolerance >= 1

ColTolerance (input_control)  integer → (integer)

Half width of matching search window.

Default value: 200

Typical range of values: 50 ≤ ColTolerance ≤ 200

Restriction: ColTolerance >= 1

Rotation (input_control)  number(-array) → (real / integer)

Estimate of the relative orientation of the right image with respect to the left image.

Default value: 0.0

Suggested values: 0.0, 0.1, -0.1, 0.7854, 1.571, 3.142

MatchThreshold (input_control)  number → (integer / real)

Threshold for gray value matching.

Default value: 10

Suggested values: 10, 20, 50, 100, 0.9, 0.7

EstimationMethod (input_control)  string → (string)

Algorithm for the computation of the relative pose and for special pose types.

Default value: 'normalized_dlt'

List of values: 'gold_standard', 'normalized_dlt', 'trans_gold_standard', 'trans_normalized_dlt'

DistanceThreshold (input_control)  number → (real / integer)

Maximal deviation of a point from its epipolar line.

Default value: 1

Typical range of values: 0.5 ≤ DistanceThreshold ≤ 5

Restriction: DistanceThreshold > 0

RandSeed (input_control)  integer → (integer)

Seed for the random number generator.

Default value: 0

RelPose (output_control)  pose → (real / integer)

Computed relative orientation of the cameras (3D pose).

CovRelPose (output_control)  real-array → (real)

6x6 covariance matrix of the relative orientation.

Error (output_control)  real(-array) → (real)

Root-Mean-Square of the epipolar distance error.

Points1 (output_control)  integer-array → (integer)

Indices of matched input points in image 1.

Points2 (output_control)  integer-array → (integer)

Indices of matched input points in image 2.

Possible Predecessors

points_foerstner, points_harris

Possible Successors

vector_to_rel_pose, gen_binocular_rectification_map

See also

binocular_calibration, match_fundamental_matrix_ransac, match_essential_matrix_ransac, create_pose

References

Richard Hartley, Andrew Zisserman: “Multiple View Geometry in Computer Vision”; Cambridge University Press, Cambridge; 2003.
Olivier Faugeras, Quang-Tuan Luong: “The Geometry of Multiple Images: The Laws That Govern the Formation of Multiple Images of a Scene and Some of Their Applications”; MIT Press, Cambridge, MA; 2001.

Module

3D Metrology

