Name
vector_to_rel_pose — Compute the relative orientation between two cameras given image point
correspondences and known camera parameters and reconstruct 3D space points.
vector_to_rel_pose( : : Rows1, Cols1, Rows2, Cols2, CovRR1, CovRC1, CovCC1, CovRR2, CovRC2, CovCC2, CamPar1, CamPar2, Method : RelPose, CovRelPose, Error, X, Y, Z, CovXYZ)
Herror T_vector_to_rel_pose(const Htuple Rows1, const Htuple Cols1, const Htuple Rows2, const Htuple Cols2, const Htuple CovRR1, const Htuple CovRC1, const Htuple CovCC1, const Htuple CovRR2, const Htuple CovRC2, const Htuple CovCC2, const Htuple CamPar1, const Htuple CamPar2, const Htuple Method, Htuple* RelPose, Htuple* CovRelPose, Htuple* Error, Htuple* X, Htuple* Y, Htuple* Z, Htuple* CovXYZ)
Herror vector_to_rel_pose(const HTuple& Rows1, const HTuple& Cols1, const HTuple& Rows2, const HTuple& Cols2, const HTuple& CovRR1, const HTuple& CovRC1, const HTuple& CovCC1, const HTuple& CovRR2, const HTuple& CovRC2, const HTuple& CovCC2, const HTuple& CamPar1, const HTuple& CamPar2, const HTuple& Method, HTuple* RelPose, HTuple* CovRelPose, HTuple* Error, HTuple* X, HTuple* Y, HTuple* Z, HTuple* CovXYZ)
void VectorToRelPose(const HTuple& Rows1, const HTuple& Cols1, const HTuple& Rows2, const HTuple& Cols2, const HTuple& CovRR1, const HTuple& CovRC1, const HTuple& CovCC1, const HTuple& CovRR2, const HTuple& CovRC2, const HTuple& CovCC2, const HTuple& CamPar1, const HTuple& CamPar2, const HTuple& Method, HTuple* RelPose, HTuple* CovRelPose, HTuple* Error, HTuple* X, HTuple* Y, HTuple* Z, HTuple* CovXYZ)
HTuple HPose::VectorToRelPose(const HTuple& Rows1, const HTuple& Cols1, const HTuple& Rows2, const HTuple& Cols2, const HTuple& CovRR1, const HTuple& CovRC1, const HTuple& CovCC1, const HTuple& CovRR2, const HTuple& CovRC2, const HTuple& CovCC2, const HTuple& CamPar1, const HTuple& CamPar2, const HString& Method, HTuple* Error, HTuple* X, HTuple* Y, HTuple* Z, HTuple* CovXYZ)
HTuple HPose::VectorToRelPose(const HTuple& Rows1, const HTuple& Cols1, const HTuple& Rows2, const HTuple& Cols2, const HTuple& CovRR1, const HTuple& CovRC1, const HTuple& CovCC1, const HTuple& CovRR2, const HTuple& CovRC2, const HTuple& CovCC2, const HTuple& CamPar1, const HTuple& CamPar2, const HString& Method, double* Error, HTuple* X, HTuple* Y, HTuple* Z, HTuple* CovXYZ)
HTuple HPose::VectorToRelPose(const HTuple& Rows1, const HTuple& Cols1, const HTuple& Rows2, const HTuple& Cols2, const HTuple& CovRR1, const HTuple& CovRC1, const HTuple& CovCC1, const HTuple& CovRR2, const HTuple& CovRC2, const HTuple& CovCC2, const HTuple& CamPar1, const HTuple& CamPar2, const char* Method, double* Error, HTuple* X, HTuple* Y, HTuple* Z, HTuple* CovXYZ)
void HOperatorSetX.VectorToRelPose(
[in] VARIANT Rows1, [in] VARIANT Cols1, [in] VARIANT Rows2, [in] VARIANT Cols2, [in] VARIANT CovRR1, [in] VARIANT CovRC1, [in] VARIANT CovCC1, [in] VARIANT CovRR2, [in] VARIANT CovRC2, [in] VARIANT CovCC2, [in] VARIANT CamPar1, [in] VARIANT CamPar2, [in] VARIANT Method, [out] VARIANT* RelPose, [out] VARIANT* CovRelPose, [out] VARIANT* Error, [out] VARIANT* X, [out] VARIANT* Y, [out] VARIANT* Z, [out] VARIANT* CovXYZ)
VARIANT HPoseX.VectorToRelPose(
[in] VARIANT Rows1, [in] VARIANT Cols1, [in] VARIANT Rows2, [in] VARIANT Cols2, [in] VARIANT CovRR1, [in] VARIANT CovRC1, [in] VARIANT CovCC1, [in] VARIANT CovRR2, [in] VARIANT CovRC2, [in] VARIANT CovCC2, [in] VARIANT CamPar1, [in] VARIANT CamPar2, [in] BSTR Method, [out] VARIANT* CovRelPose, [out] VARIANT* Error, [out] VARIANT* X, [out] VARIANT* Y, [out] VARIANT* Z, [out] VARIANT* CovXYZ)
static void HOperatorSet.VectorToRelPose(HTuple rows1, HTuple cols1, HTuple rows2, HTuple cols2, HTuple covRR1, HTuple covRC1, HTuple covCC1, HTuple covRR2, HTuple covRC2, HTuple covCC2, HTuple camPar1, HTuple camPar2, HTuple method, out HTuple relPose, out HTuple covRelPose, out HTuple error, out HTuple x, out HTuple y, out HTuple z, out HTuple covXYZ)
HTuple HPose.VectorToRelPose(HTuple rows1, HTuple cols1, HTuple rows2, HTuple cols2, HTuple covRR1, HTuple covRC1, HTuple covCC1, HTuple covRR2, HTuple covRC2, HTuple covCC2, HTuple camPar1, HTuple camPar2, string method, out HTuple error, out HTuple x, out HTuple y, out HTuple z, out HTuple covXYZ)
HTuple HPose.VectorToRelPose(HTuple rows1, HTuple cols1, HTuple rows2, HTuple cols2, HTuple covRR1, HTuple covRC1, HTuple covCC1, HTuple covRR2, HTuple covRC2, HTuple covCC2, HTuple camPar1, HTuple camPar2, string method, out double error, out HTuple x, out HTuple y, out HTuple z, out HTuple covXYZ)
For a stereo configuration with known camera parameters the geometric
relation between the two images is defined by the relative pose.
The operator vector_to_rel_pose computes the relative pose from,
in general, at least six point correspondences in the image pair.
RelPose indicates the relative pose of camera 1 with respect
to camera 2 (see create_pose for more information about
poses and their representations). This is in accordance with the
explicit calibration of a stereo setup using the operator
calibrate_cameras.
Now, let R,t be the rotation and translation
of the relative pose. Then, the essential matrix
E is defined as E=([t]_x R)^T, where
[t]_x denotes the 3x3 skew-symmetric
matrix realising the cross product with the vector t.
The pose can be determined from the epipolar constraint:

   / X2 \T                / X1 \                       /  0   -t_z   t_y \
   | Y2 |   * ([t]_x R) * | Y1 |  =  0,  where [t]_x = |  t_z   0   -t_x | .
   \ 1  /                 \ 1  /                       \ -t_y   t_x   0  /
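The constraint above can be verified numerically. The following is an illustrative sketch (not HALCON code) with a hypothetical rotation and translation; it builds the skew-symmetric matrix [t]_x, the essential matrix E = ([t]_x R)^T, and checks that a perfect correspondence satisfies the epipolar constraint:

```python
# Sketch: epipolar constraint with a synthetic relative pose (not HALCON code).
import numpy as np

def skew(t):
    """3x3 skew-symmetric matrix [t]_x with skew(t) @ v == cross(t, v)."""
    tx, ty, tz = t
    return np.array([[0.0, -tz,  ty],
                     [ tz, 0.0, -tx],
                     [-ty,  tx, 0.0]])

# Hypothetical relative pose: rotation R about the z axis, translation t.
a = 0.1
R = np.array([[np.cos(a), -np.sin(a), 0.0],
              [np.sin(a),  np.cos(a), 0.0],
              [0.0,        0.0,       1.0]])
t = np.array([1.0, 0.2, 0.0])
t /= np.linalg.norm(t)              # translation only defined up to scale

E = (skew(t) @ R).T                 # essential matrix E = ([t]_x R)^T

# A 3D point in camera-1 coordinates and its position in camera 2:
P1 = np.array([0.3, -0.4, 5.0])
P2 = R @ P1 + t
p1 = P1 / P1[2]                     # direction vector (X1, Y1, 1)
p2 = P2 / P2[2]                     # direction vector (X2, Y2, 1)

# Epipolar constraint: p2^T * ([t]_x R) * p1 = 0 for an exact correspondence.
residual = p2 @ (skew(t) @ R) @ p1
print(abs(residual) < 1e-9)         # True
```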
Note that the essential matrix is a projective entity and thus is
defined only up to a scaling factor. It follows that the
translation vector of the relative pose can also be determined only up to
scale. In fact, the computed translation vector is always
normalized to unit length. As a consequence, a three-dimensional
reconstruction of the scene, here in terms of points given by their
coordinates (X,Y,Z), can be carried
out only up to a single global scaling factor. If absolute 3D
coordinates of the reconstruction are required, the unknown
scaling factor can be computed from a gauge, which has to be visible
in both images. For example, a simple gauge can be given by any
known distance between points in the scene.
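Resolving the global scale from such a gauge is straightforward; the sketch below (assumed workflow, not HALCON code, with made-up point values) divides a known metric distance by the corresponding distance in the up-to-scale reconstruction:

```python
# Sketch: resolving the unknown global scale from a gauge, i.e. a known
# distance between two reconstructed scene points (synthetic values).
import numpy as np

# Hypothetical reconstructed points taken from the X, Y, Z output tuples:
p_a = np.array([0.12, 0.05, 1.40])
p_b = np.array([0.52, 0.05, 1.40])

known_distance_m = 0.20                            # the gauge: true distance
reconstructed_distance = np.linalg.norm(p_b - p_a)

scale = known_distance_m / reconstructed_distance
# Multiplying all reconstructed points and the pose translation by this
# factor yields a metric reconstruction.
print(round(scale, 6))                             # 0.5
```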
The operator vector_to_rel_pose is designed to deal with
a camera model that includes lens distortions. This is in contrast to the
operator vector_to_essential_matrix, which encompasses only straight-line-preserving
cameras. The camera parameters are passed in the arguments
CamPar1 and CamPar2. The
3D direction vectors (X1,Y1,1) and
(X2,Y2,1) are calculated from the point
coordinates (Rows1,Cols1) and
(Rows2,Cols2) by inverting the process of
projection (see calibrate_cameras).
The point correspondences are typically determined by applying the operator
match_rel_pose_ransac.
The parameter Method decides whether the relative orientation
between the cameras is of a special type and which algorithm is to be applied
for its computation.
If Method is either 'normalized_dlt' or
'gold_standard', the relative orientation is arbitrary.
Choosing 'trans_normalized_dlt' or 'trans_gold_standard'
means that the relative motion between the cameras is a pure translation.
The typical application for this special motion case is the
scenario of a single fixed camera looking onto a moving conveyor belt.
In this case the minimum required number of corresponding points is just two
instead of six in the general case.
The relative pose is computed by a linear algorithm if
'normalized_dlt' or 'trans_normalized_dlt' is chosen.
With 'gold_standard' or 'trans_gold_standard'
the algorithm gives a statistically optimal result.
Here, 'normalized_dlt' and 'gold_standard' stand for the
direct linear transformation and the gold standard algorithm, respectively.
All methods return the coordinates (X,Y,Z)
of the reconstructed 3D points. The optimal methods also return
the covariances of the 3D points in CovXYZ.
Let n be the number of points;
then the 3x3 covariance matrices are concatenated and
stored in a tuple of length 9n.
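As a sketch of that layout (illustrative only, assuming the usual row-major ordering of HALCON tuples), the 3x3 covariance matrix of the i-th point can be recovered from the flat tuple like this:

```python
# Sketch: recover the i-th 3x3 covariance matrix from a flat tuple of
# length 9n (row-major layout assumed).
import numpy as np

def cov_of_point(cov_xyz, i):
    """Return the 3x3 covariance matrix of the i-th reconstructed point."""
    block = np.asarray(cov_xyz[9 * i : 9 * i + 9], dtype=float)
    return block.reshape(3, 3)

# Two points -> flat tuple of length 18 (synthetic values for illustration):
cov_xyz = list(range(18))
print(cov_of_point(cov_xyz, 1))
```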
Additionally, the optimal methods return the 6x6 covariance
matrix of the pose in CovRelPose.
If an optimal gold standard algorithm is chosen, the covariances of the image
points (CovRR1, CovRC1, CovCC1, CovRR2,
CovRC2, CovCC2) can be incorporated in the computation.
They can be provided, for example, by the operator points_foerstner.
If the point covariances are unknown, which is the default, empty tuples
are input. In this case the optimization algorithm internally assumes
uniform and equal covariances for all points.
The value Error indicates the overall quality of the optimization
process and is the root-mean-square Euclidean distance in pixels between the
points and their corresponding epipolar lines.
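An RMS epipolar distance of this kind can be sketched as follows (illustrative only; HALCON's internal computation may differ). The helper measures the distance of each point to the epipolar line induced by its correspondence, averaged over both images, here demonstrated with a fundamental matrix for a rectified pair under pure x translation:

```python
# Sketch: root-mean-square point-to-epipolar-line distance (not HALCON code).
import numpy as np

def rms_epipolar_distance(F, pts1, pts2):
    """RMS distance (pixels) of points to their epipolar lines, both images.
    F: 3x3 fundamental matrix; pts1, pts2: lists of (x, y) coordinates."""
    d2 = []
    for p1, p2 in zip(pts1, pts2):
        h1 = np.array([p1[0], p1[1], 1.0])
        h2 = np.array([p2[0], p2[1], 1.0])
        l2 = F @ h1                        # epipolar line of p1 in image 2
        l1 = F.T @ h2                      # epipolar line of p2 in image 1
        d2.append((h2 @ l2) ** 2 / (l2[0] ** 2 + l2[1] ** 2))
        d2.append((h1 @ l1) ** 2 / (l1[0] ** 2 + l1[1] ** 2))
    return float(np.sqrt(np.mean(d2)))

# Rectified pair, pure x translation: corresponding points share their row.
F = np.array([[0.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])
pts1 = [(10.0, 5.0), (20.0, 8.0)]
pts2 = [(30.0, 5.0), (52.0, 8.0)]
print(rms_epipolar_distance(F, pts1, pts2))   # 0.0 for exact matches
```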
For the operator vector_to_rel_pose a special configuration
of scene points and cameras exists: if all 3D points lie in a single plane
and additionally are all closer to one of the two cameras, the solution
for the relative pose is not unique but twofold. As a consequence, both
solutions are computed and returned by the operator.
This means that all output parameters are of double length, and the values
of the second solution are simply concatenated behind the values of the
first one.
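Splitting such doubled outputs back into the two solutions can be sketched as follows (illustrative helper; the 7-element pose length reflects HALCON's usual pose tuple, which is an assumption here):

```python
# Sketch: separate the two concatenated solutions in a doubled output tuple.
def split_solutions(values, single_length):
    """Return (first, second) if the tuple holds two concatenated solutions,
    otherwise (values, None)."""
    if len(values) == 2 * single_length:
        return list(values[:single_length]), list(values[single_length:])
    return list(values), None

# A pose tuple normally has 7 elements; length 14 indicates two solutions
# (synthetic values for illustration):
pose = list(range(14))
first, second = split_solutions(pose, 7)
print(first)    # [0, 1, 2, 3, 4, 5, 6]
```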
- Multithreading type: reentrant (runs in parallel with non-exclusive operators).
- Multithreading scope: global (may be called from any thread).
- Processed without parallelization.
Input points in image 1 (row coordinate).
Restriction: length(Rows1) >= 6 || length(Rows1) >= 2
Input points in image 1 (column coordinate).
Restriction: length(Cols1) == length(Rows1)
Input points in image 2 (row coordinate).
Restriction: length(Rows2) == length(Rows1)
Input points in image 2 (column coordinate).
Restriction: length(Cols2) == length(Rows1)
Row coordinate variance of the points in image 1.
Default value: []
Covariance of the points in image 1.
Default value: []
Column coordinate variance of the points in image 1.
Default value: []
Row coordinate variance of the points in image 2.
Default value: []
Covariance of the points in image 2.
Default value: []
Column coordinate variance of the points in image 2.
Default value: []
Camera parameters of the 1st camera.
Camera parameters of the 2nd camera.
Algorithm for the computation of the
relative pose and for special pose types.
Default value: 'normalized_dlt'
List of values: 'gold_standard', 'normalized_dlt', 'trans_gold_standard', 'trans_normalized_dlt'
Computed relative orientation of the cameras (3D pose).
6x6 covariance matrix of the
relative camera orientation.
Root-Mean-Square of the epipolar distance error.
X (output_control) real-array → HTuple (real)
X coordinates of the reconstructed 3D points.
Y (output_control) real-array → HTuple (real)
Y coordinates of the reconstructed 3D points.
Z (output_control) real-array → HTuple (real)
Z coordinates of the reconstructed 3D points.
Covariance matrices of the reconstructed 3D points.
match_rel_pose_ransac
gen_binocular_rectification_map,
rel_pose_to_fundamental_matrix
vector_to_essential_matrix,
vector_to_fundamental_matrix,
binocular_calibration
camera_calibration
Richard Hartley, Andrew Zisserman: “Multiple View Geometry in
Computer Vision”; Cambridge University Press, Cambridge; 2003.
J. Chris McGlone (editor): “Manual of Photogrammetry”;
American Society for Photogrammetry and Remote Sensing; 2004.
3D Metrology