Name
vector_to_fundamental_matrix_distortion — Compute the fundamental matrix and the radial distortion coefficient
given a set of image point correspondences and reconstruct 3D
points.
vector_to_fundamental_matrix_distortion( : : Rows1, Cols1, Rows2, Cols2, CovRR1, CovRC1, CovCC1, CovRR2, CovRC2, CovCC2, ImageWidth, ImageHeight, Method : FMatrix, Kappa, Error, X, Y, Z, W)
Herror T_vector_to_fundamental_matrix_distortion(const Htuple Rows1, const Htuple Cols1, const Htuple Rows2, const Htuple Cols2, const Htuple CovRR1, const Htuple CovRC1, const Htuple CovCC1, const Htuple CovRR2, const Htuple CovRC2, const Htuple CovCC2, const Htuple ImageWidth, const Htuple ImageHeight, const Htuple Method, Htuple* FMatrix, Htuple* Kappa, Htuple* Error, Htuple* X, Htuple* Y, Htuple* Z, Htuple* W)
Herror vector_to_fundamental_matrix_distortion(const HTuple& Rows1, const HTuple& Cols1, const HTuple& Rows2, const HTuple& Cols2, const HTuple& CovRR1, const HTuple& CovRC1, const HTuple& CovCC1, const HTuple& CovRR2, const HTuple& CovRC2, const HTuple& CovCC2, const HTuple& ImageWidth, const HTuple& ImageHeight, const HTuple& Method, HTuple* FMatrix, HTuple* Kappa, HTuple* Error, HTuple* X, HTuple* Y, HTuple* Z, HTuple* W)
void VectorToFundamentalMatrixDistortion(const HTuple& Rows1, const HTuple& Cols1, const HTuple& Rows2, const HTuple& Cols2, const HTuple& CovRR1, const HTuple& CovRC1, const HTuple& CovCC1, const HTuple& CovRR2, const HTuple& CovRC2, const HTuple& CovCC2, const HTuple& ImageWidth, const HTuple& ImageHeight, const HTuple& Method, HTuple* FMatrix, HTuple* Kappa, HTuple* Error, HTuple* X, HTuple* Y, HTuple* Z, HTuple* W)
double HHomMat2D::VectorToFundamentalMatrixDistortion(const HTuple& Rows1, const HTuple& Cols1, const HTuple& Rows2, const HTuple& Cols2, const HTuple& CovRR1, const HTuple& CovRC1, const HTuple& CovCC1, const HTuple& CovRR2, const HTuple& CovRC2, const HTuple& CovCC2, Hlong ImageWidth, Hlong ImageHeight, const HString& Method, double* Error, HTuple* X, HTuple* Y, HTuple* Z, HTuple* W)
double HHomMat2D::VectorToFundamentalMatrixDistortion(const HTuple& Rows1, const HTuple& Cols1, const HTuple& Rows2, const HTuple& Cols2, const HTuple& CovRR1, const HTuple& CovRC1, const HTuple& CovCC1, const HTuple& CovRR2, const HTuple& CovRC2, const HTuple& CovCC2, Hlong ImageWidth, Hlong ImageHeight, const char* Method, double* Error, HTuple* X, HTuple* Y, HTuple* Z, HTuple* W)
void HOperatorSetX.VectorToFundamentalMatrixDistortion(
[in] VARIANT Rows1, [in] VARIANT Cols1, [in] VARIANT Rows2, [in] VARIANT Cols2, [in] VARIANT CovRR1, [in] VARIANT CovRC1, [in] VARIANT CovCC1, [in] VARIANT CovRR2, [in] VARIANT CovRC2, [in] VARIANT CovCC2, [in] VARIANT ImageWidth, [in] VARIANT ImageHeight, [in] VARIANT Method, [out] VARIANT* FMatrix, [out] VARIANT* Kappa, [out] VARIANT* Error, [out] VARIANT* X, [out] VARIANT* Y, [out] VARIANT* Z, [out] VARIANT* W)
double HHomMat2DX.VectorToFundamentalMatrixDistortion(
[in] VARIANT Rows1, [in] VARIANT Cols1, [in] VARIANT Rows2, [in] VARIANT Cols2, [in] VARIANT CovRR1, [in] VARIANT CovRC1, [in] VARIANT CovCC1, [in] VARIANT CovRR2, [in] VARIANT CovRC2, [in] VARIANT CovCC2, [in] Hlong ImageWidth, [in] Hlong ImageHeight, [in] BSTR Method, [out] double* Error, [out] VARIANT* X, [out] VARIANT* Y, [out] VARIANT* Z, [out] VARIANT* W)
static void HOperatorSet.VectorToFundamentalMatrixDistortion(HTuple rows1, HTuple cols1, HTuple rows2, HTuple cols2, HTuple covRR1, HTuple covRC1, HTuple covCC1, HTuple covRR2, HTuple covRC2, HTuple covCC2, HTuple imageWidth, HTuple imageHeight, HTuple method, out HTuple FMatrix, out HTuple kappa, out HTuple error, out HTuple x, out HTuple y, out HTuple z, out HTuple w)
double HHomMat2D.VectorToFundamentalMatrixDistortion(HTuple rows1, HTuple cols1, HTuple rows2, HTuple cols2, HTuple covRR1, HTuple covRC1, HTuple covCC1, HTuple covRR2, HTuple covRC2, HTuple covCC2, int imageWidth, int imageHeight, string method, out double error, out HTuple x, out HTuple y, out HTuple z, out HTuple w)
For a stereo configuration with unknown camera parameters, the
geometric relation between the two images is defined by the
fundamental matrix. vector_to_fundamental_matrix_distortion
determines the fundamental matrix FMatrix and the radial
distortion coefficient Kappa from given point correspondences
(Rows1,Cols1) and (Rows2,Cols2) that fulfill the epipolar
constraint:
   / c2 \ T               / c1 \
   | r2 |   *  FMatrix *  | r1 |  =  0 .
   \ 1  /                 \ 1  /
Here, (r1,c1) and (r2,c2)
denote image points that are obtained by undistorting the input
image points with the division model (see
calibrate_cameras):
   r = r' / (1 + Kappa * (r'^2 + c'^2))
   c = c' / (1 + Kappa * (r'^2 + c'^2))
Here, (r1',c1') =
(Rows1 - 0.5*(ImageHeight-1), Cols1 - 0.5*(ImageWidth-1))
and (r2',c2') =
(Rows2 - 0.5*(ImageHeight-1), Cols2 - 0.5*(ImageWidth-1))
denote the distorted image points, specified relative to the
image center. Thus, vector_to_fundamental_matrix_distortion
assumes that the principal point of the camera, i.e., the center
of the radial distortion, lies at the center of the image.
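The centering and division-model undistortion described above can be sketched numerically. The following is a minimal numpy illustration under the stated assumptions, not HALCON code; the function name is hypothetical and the variable names merely mirror the operator's parameters:

```python
import numpy as np

def undistort_division(rows, cols, kappa, image_width, image_height):
    """Center the pixel coordinates and apply the division model.

    The principal point is assumed to lie at the image center, as
    vector_to_fundamental_matrix_distortion assumes.
    """
    r_d = np.asarray(rows, dtype=float) - 0.5 * (image_height - 1)
    c_d = np.asarray(cols, dtype=float) - 0.5 * (image_width - 1)
    denom = 1.0 + kappa * (r_d ** 2 + c_d ** 2)
    return r_d / denom, c_d / denom
```

With kappa = 0 the transformation reduces to a pure shift of the origin to the image center.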
The returned Kappa can be used to construct camera
parameters that can be used to rectify images or points (see
change_radial_distortion_cam_par,
change_radial_distortion_image, and
change_radial_distortion_points):
   CamPar = [0.0, Kappa, 1.0, 1.0,
             0.5*(ImageWidth-1), 0.5*(ImageHeight-1),
             ImageWidth, ImageHeight]
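Assuming Kappa has already been estimated, the camera parameter tuple above could be assembled as follows. This is a hypothetical Python sketch that simply mirrors the tuple layout shown; it does not call any HALCON operator:

```python
def make_cam_par(kappa, image_width, image_height):
    """Build the camera parameter tuple described above.

    Layout (as in the documentation snippet): a zero focus entry,
    the distortion coefficient, unit pixel sizes, the image center
    as principal point, and the image dimensions.
    """
    return [0.0, kappa, 1.0, 1.0,
            0.5 * (image_width - 1), 0.5 * (image_height - 1),
            image_width, image_height]
```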
Note the column/row ordering in the point coordinates above: since
the fundamental matrix encodes the projective relation between two
stereo images embedded in 3D space, the x/y notation must be
consistent with the camera coordinate system. Therefore, (x,y)
coordinates correspond to (column,row) pairs.
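The (column,row) ordering can be illustrated with a small numpy check of the epipolar constraint. This is an illustrative sketch with assumed names, not part of the HALCON API:

```python
import numpy as np

def epipolar_residual(f_matrix, rows1, cols1, rows2, cols2):
    """Evaluate (c2, r2, 1) * FMatrix * (c1, r1, 1)^T per correspondence.

    Note the (column, row) ordering: x is the column, y is the row.
    For exact, noise-free correspondences the residual is zero.
    """
    p1 = np.stack([cols1, rows1, np.ones_like(cols1)])  # 3 x n
    p2 = np.stack([cols2, rows2, np.ones_like(cols2)])  # 3 x n
    return np.einsum('in,ij,jn->n', p2, f_matrix, p1)

# Example F for a pure horizontal translation: epipolar lines are
# the image rows, so matching points must have equal row coordinates.
F = np.array([[0., 0., 0.],
              [0., 0., -1.],
              [0., 1., 0.]])
```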
For a general relative orientation of the two cameras, the minimum
number of required point correspondences is nine. In this case,
Method must be set to 'linear' or 'gold_standard'. If the left and
right cameras are identical and the relative orientation between
them is a pure translation, Method must be set to 'trans_linear'
or 'trans_gold_standard'. In this special case, the minimum number
of correspondences is only four. A typical application in which
the motion is a pure translation is a single fixed camera looking
at a moving conveyor belt.
The fundamental matrix is determined by minimizing a cost function.
To minimize the respective error, different algorithms are
available: the user can choose between the linear algorithm
('linear') and the gold-standard algorithm ('gold_standard').
Like the motion type, the algorithm is selected with the parameter
Method. For Method = 'linear' or 'trans_linear', a linear
algorithm that minimizes an algebraic error based on the above
epipolar constraint is used. This algorithm is very fast. For the
pure translation case (Method = 'trans_linear'), the linear method
returns accurate results for small to moderate noise in the point
coordinates and for most distortions (except very small
distortions). For a general relative orientation of the two
cameras (Method = 'linear'), the linear method only returns
accurate results for very small noise in the point coordinates and
for sufficiently large distortions. For Method = 'gold_standard'
or 'trans_gold_standard', a mathematically optimal but slower
optimization is used, which minimizes the geometric reprojection
error of reconstructed projective 3D points. In this case, in
addition to the fundamental matrix and the distortion coefficient,
the projective coordinates (X,Y,Z,W) of the reconstructed points
are returned. For a general relative orientation of the two
cameras, Method = 'gold_standard' should typically be selected.
If one of the gold-standard algorithms is chosen, the covariances
of the image points (CovRR1, CovRC1, CovCC1, CovRR2, CovRC2,
CovCC2) can be incorporated into the computation. They can be
provided, for example, by the operator points_foerstner. If the
point covariances are unknown, which is the default, empty tuples
are passed. In this case, the optimization algorithm internally
assumes uniform and equal covariances for all points.
The value Error indicates the overall quality of the optimization
and is the mean symmetric Euclidean distance in pixels between the
points and their corresponding epipolar lines.
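The error measure, a mean symmetric distance between the points and their epipolar lines, can be sketched as follows. This is an illustrative numpy version under assumed conventions, not the operator's internal computation:

```python
import numpy as np

def mean_symmetric_epipolar_distance(f_matrix, p1, p2):
    """Mean symmetric point-to-epipolar-line distance in pixels.

    p1, p2: 3 x n homogeneous points (x, y, 1) with x = column and
    y = row. For each correspondence, the distance of p2 to its
    epipolar line F @ p1 and of p1 to its line F.T @ p2 is averaged.
    """
    l2 = f_matrix @ p1                      # epipolar lines in image 2
    l1 = f_matrix.T @ p2                    # epipolar lines in image 1
    d2 = np.abs(np.sum(p2 * l2, axis=0)) / np.hypot(l2[0], l2[1])
    d1 = np.abs(np.sum(p1 * l1, axis=0)) / np.hypot(l1[0], l1[1])
    return 0.5 * np.mean(d1 + d2)
```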
If the correspondence between the points is not known,
match_fundamental_matrix_distortion_ransac should be used
instead.
- Multithreading type: reentrant (runs in parallel with non-exclusive operators).
- Multithreading scope: global (may be called from any thread).
- Processed without parallelization.
Input points in image 1 (row coordinate).
Restriction: length(Rows1) >= 9 (for 'linear' and 'gold_standard') or length(Rows1) >= 4 (for 'trans_linear' and 'trans_gold_standard')
Input points in image 1 (column coordinate).
Restriction: length(Cols1) == length(Rows1)
Input points in image 2 (row coordinate).
Restriction: length(Rows2) == length(Rows1)
Input points in image 2 (column coordinate).
Restriction: length(Cols2) == length(Rows1)
Row coordinate variance of the points in image 1.
Default value: []
Covariance of the points in image 1.
Default value: []
Column coordinate variance of the points in image 1.
Default value: []
Row coordinate variance of the points in image 2.
Default value: []
Covariance of the points in image 2.
Default value: []
Column coordinate variance of the points in image 2.
Default value: []
Width of the images from which the points were
extracted.
Restriction: ImageWidth > 0
Height of the images from which the points were
extracted.
Restriction: ImageHeight > 0
Estimation algorithm.
Default value: 'gold_standard'
List of values: 'gold_standard', 'linear', 'trans_gold_standard', 'trans_linear'
Computed fundamental matrix.
Computed radial distortion coefficient.
Root-Mean-Square epipolar distance error.
X (output_control) real-array → HTuple (real / double)
X coordinates of the reconstructed points in projective
3D space.
Y (output_control) real-array → HTuple (real / double)
Y coordinates of the reconstructed points in projective
3D space.
Z (output_control) real-array → HTuple (real / double)
Z coordinates of the reconstructed points in projective
3D space.
W (output_control) real-array → HTuple (real / double)
W coordinates of the reconstructed points in projective
3D space.
match_fundamental_matrix_distortion_ransac
change_radial_distortion_cam_par,
change_radial_distortion_image,
change_radial_distortion_points,
gen_binocular_proj_rectification
vector_to_fundamental_matrix,
vector_to_essential_matrix,
vector_to_rel_pose
calibrate_cameras
Richard Hartley, Andrew Zisserman: “Multiple View Geometry in
Computer Vision”; Cambridge University Press, Cambridge; 2003.
Olivier Faugeras, Quang-Tuan Luong: “The Geometry of Multiple
Images: The Laws That Govern the Formation of Multiple Images of a
Scene and Some of Their Applications”; MIT Press, Cambridge, MA;
2001.
3D Metrology