Name
gen_binocular_rectification_map — Generate transformation maps that describe the mapping of the images of a
binocular camera pair to a common rectified image plane.
gen_binocular_rectification_map( : Map1, Map2 : CamParam1, CamParam2, RelPose, SubSampling, Method, MapType : CamParamRect1, CamParamRect2, CamPoseRect1, CamPoseRect2, RelPoseRect)
Herror T_gen_binocular_rectification_map(Hobject* Map1, Hobject* Map2, const Htuple CamParam1, const Htuple CamParam2, const Htuple RelPose, const Htuple SubSampling, const Htuple Method, const Htuple MapType, Htuple* CamParamRect1, Htuple* CamParamRect2, Htuple* CamPoseRect1, Htuple* CamPoseRect2, Htuple* RelPoseRect)
Herror gen_binocular_rectification_map(Hobject* Map1, Hobject* Map2, const HTuple& CamParam1, const HTuple& CamParam2, const HTuple& RelPose, const HTuple& SubSampling, const HTuple& Method, const HTuple& MapType, HTuple* CamParamRect1, HTuple* CamParamRect2, HTuple* CamPoseRect1, HTuple* CamPoseRect2, HTuple* RelPoseRect)
HImage HImage::GenBinocularRectificationMap(HImage* Map2, const HTuple& CamParam1, const HTuple& CamParam2, const HTuple& RelPose, const HTuple& SubSampling, const HTuple& Method, const HTuple& MapType, HTuple* CamParamRect1, HTuple* CamParamRect2, HTuple* CamPoseRect1, HTuple* CamPoseRect2, HTuple* RelPoseRect)
void GenBinocularRectificationMap(HObject* Map1, HObject* Map2, const HTuple& CamParam1, const HTuple& CamParam2, const HTuple& RelPose, const HTuple& SubSampling, const HTuple& Method, const HTuple& MapType, HTuple* CamParamRect1, HTuple* CamParamRect2, HTuple* CamPoseRect1, HTuple* CamPoseRect2, HTuple* RelPoseRect)
HImage HImage::GenBinocularRectificationMap(const HTuple& CamParam1, const HTuple& CamParam2, const HPose& RelPose, double SubSampling, const HString& Method, const HString& MapType, HTuple* CamParamRect1, HTuple* CamParamRect2, HPose* CamPoseRect1, HPose* CamPoseRect2, HPose* RelPoseRect)
HImage HImage::GenBinocularRectificationMap(const HTuple& CamParam1, const HTuple& CamParam2, const HPose& RelPose, double SubSampling, const char* Method, const char* MapType, HTuple* CamParamRect1, HTuple* CamParamRect2, HPose* CamPoseRect1, HPose* CamPoseRect2, HPose* RelPoseRect)
HImage HPose::GenBinocularRectificationMap(HImage* Map2, const HTuple& CamParam1, const HTuple& CamParam2, double SubSampling, const HString& Method, const HString& MapType, HTuple* CamParamRect1, HTuple* CamParamRect2, HPose* CamPoseRect1, HPose* CamPoseRect2, HPose* RelPoseRect) const
HImage HPose::GenBinocularRectificationMap(HImage* Map2, const HTuple& CamParam1, const HTuple& CamParam2, double SubSampling, const char* Method, const char* MapType, HTuple* CamParamRect1, HTuple* CamParamRect2, HPose* CamPoseRect1, HPose* CamPoseRect2, HPose* RelPoseRect) const
void HOperatorSetX.GenBinocularRectificationMap(
[out] IHUntypedObjectX** Map1, [out] IHUntypedObjectX** Map2, [in] VARIANT CamParam1, [in] VARIANT CamParam2, [in] VARIANT RelPose, [in] VARIANT SubSampling, [in] VARIANT Method, [in] VARIANT MapType, [out] VARIANT* CamParamRect1, [out] VARIANT* CamParamRect2, [out] VARIANT* CamPoseRect1, [out] VARIANT* CamPoseRect2, [out] VARIANT* RelPoseRect)
IHImageX* HImageX.GenBinocularRectificationMap(
[in] VARIANT CamParam1, [in] VARIANT CamParam2, [in] VARIANT RelPose, [in] double SubSampling, [in] BSTR Method, [in] BSTR MapType, [out] VARIANT* CamParamRect1, [out] VARIANT* CamParamRect2, [out] VARIANT* CamPoseRect1, [out] VARIANT* CamPoseRect2, [out] VARIANT* RelPoseRect)
IHImageX* HPoseX.GenBinocularRectificationMap(
[out] IHImageX** Map2, [in] VARIANT CamParam1, [in] VARIANT CamParam2, [in] VARIANT RelPose, [in] double SubSampling, [in] BSTR Method, [in] BSTR MapType, [out] VARIANT* CamParamRect1, [out] VARIANT* CamParamRect2, [out] VARIANT* CamPoseRect1, [out] VARIANT* CamPoseRect2, [out] VARIANT* RelPoseRect)
static void HOperatorSet.GenBinocularRectificationMap(out HObject map1, out HObject map2, HTuple camParam1, HTuple camParam2, HTuple relPose, HTuple subSampling, HTuple method, HTuple mapType, out HTuple camParamRect1, out HTuple camParamRect2, out HTuple camPoseRect1, out HTuple camPoseRect2, out HTuple relPoseRect)
HImage HImage.GenBinocularRectificationMap(HTuple camParam1, HTuple camParam2, HPose relPose, double subSampling, string method, string mapType, out HTuple camParamRect1, out HTuple camParamRect2, out HPose camPoseRect1, out HPose camPoseRect2, out HPose relPoseRect)
HImage HPose.GenBinocularRectificationMap(out HImage map2, HTuple camParam1, HTuple camParam2, double subSampling, string method, string mapType, out HTuple camParamRect1, out HTuple camParamRect2, out HPose camPoseRect1, out HPose camPoseRect2, out HPose relPoseRect)
Given a pair of stereo images, rectification determines a
transformation of each image plane such that pairs of conjugate
epipolar lines become collinear and parallel to the horizontal image
axes. The rectified images can be thought of as being acquired by
a new stereo rig, obtained by rotating and, in the case of telecentric
cameras, translating the original cameras. The camera centers (or, in the
telecentric case, the optical axes) are maintained. For perspective cameras,
the image planes are additionally transformed into a common plane, which means
that the focal lengths become equal and the optical axes parallel.
To compute the transformation maps for rectified images,
gen_binocular_rectification_map requires the internal camera
parameters CamParam1 of camera 1 and
CamParam2 of camera 2, as well as the relative
pose RelPose, which defines a point
transformation from camera 2 to camera 1. These parameters
can be obtained, e.g., from
the operator calibrate_cameras.
The internal camera parameters, modified by the rectification, are
returned in CamParamRect1 for camera 1 and CamParamRect2
for camera 2, respectively. The rotation and, in the case of telecentric
cameras, the translation of each rectified camera in relation to
the original camera is specified by CamPoseRect1 and
CamPoseRect2, respectively. Finally, RelPoseRect
returns the modified relative pose of the rectified camera system 2
in relation to the rectified camera system 1.
For perspective cameras, RelPoseRect has only a translation in x.
Generally, the transformations are defined such that
the rectified camera 1 lies to the left of the rectified camera 2. This means
that the optical center of camera 2 has a positive x coordinate in the
rectified coordinate system of camera 1.
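Because the rectified cameras share a common image plane with parallel optical axes, depth can be recovered from disparity alone via the standard relation Z = f * b / d. The following sketch is generic pinhole-stereo arithmetic, not a HALCON call; the focal length, baseline, and disparity values are purely illustrative:

```python
# Depth from disparity in a rectified stereo pair: Z = f * b / d, where
# f is the common focal length in pixels, b the baseline in meters (the
# x translation in RelPoseRect), and d the disparity in pixels.
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: f = 800 px, b = 0.1 m, d = 40 px
z = depth_from_disparity(800.0, 0.1, 40.0)  # -> 2.0 (meters)
```

This also shows why the sign convention above matters: with camera 1 on the left, disparities of scene points in front of the rig are positive.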
The projection onto a common plane has many degrees of freedom, which
are implicitly restricted by selecting a method in
Method. Currently, 'geometric' is the only available method.
For telecentric cameras, the parameter Method is ignored.
The relative pose of both cameras is not uniquely defined in such a system,
since the cameras return identical images no matter how they are translated
along their optical axis. Yet, in order to define an absolute distance
measurement to the cameras, a standard position of both cameras is
considered. This position is defined as follows: Both cameras are translated
along their optical axes until their distance is one meter and until the line
between the cameras (baseline) forms the same angle with both optical
axes (i.e. the baseline and the optical axes form an isosceles triangle).
The optical axes remain unchanged.
The relative pose of the rectified cameras RelPoseRect may be
different from the relative pose of the original cameras RelPose.
The mapping functions for the images of camera 1 and camera 2 are
returned in the images Map1 and Map2.
MapType is used to specify the type of the output maps.
If 'nearest_neighbor' is chosen, both maps consist of one image
containing one channel, in which, for each pixel of the resulting image, the
linearized coordinate of the pixel of the input image is stored that is the
nearest neighbor to the transformed coordinates. If 'bilinear'
interpolation is chosen, both maps consist of one image containing five
channels. In the first channel, for each pixel in the resulting image, the
linearized coordinate of the pixel in the input image is stored that is in
the upper left position relative to the transformed coordinates.
The four other channels contain the weights of the four neighboring pixels
of the transformed coordinates, which are used for the bilinear interpolation,
in the following order:
+---+---+
| 2 | 3 |
+---+---+
| 4 | 5 |
+---+---+
The second channel, for example, contains the weights of the pixels that
lie to the upper left relative to the transformed coordinates.
If 'coordinate_map_sub_pix' is chosen, both maps consist of
one vector field image, in which for each pixel of the resulting image
the subpixel-precise coordinates in the input image are stored.
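The way such maps are consumed can be illustrated with a small generic sketch (plain NumPy, not the HALCON internals): for a 'nearest_neighbor'-style map, each output pixel simply copies the input pixel whose linearized coordinate is stored in the map. The image and map values below are made up for illustration.

```python
import numpy as np

# Generic illustration of applying a nearest-neighbor map: the map
# holds, per output pixel, the linearized (row-major) index of the
# source pixel in the input image.
def apply_nn_map(image, lin_map):
    flat = image.ravel()          # linearize the input image
    return flat[lin_map]          # gather; output has the map's shape

# 2x2 input image and a map that mirrors it horizontally
img = np.array([[10, 20],
                [30, 40]])
lin_map = np.array([[1, 0],
                    [3, 2]])
out = apply_nn_map(img, lin_map)  # [[20, 10], [40, 30]]
```

A 'bilinear' map works the same way, except that each gathered upper-left pixel and its three neighbors are blended with the per-pixel weights from channels 2 through 5.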
The size and resolution of the maps and of the
transformed images can be adjusted with the SubSampling parameter,
which applies a sub-sampling factor to the original images.
If you want to reuse the created map in another program, you can save it as
a multi-channel image with the operator write_image, using the format
'tiff'.
- Multithreading type: reentrant (runs in parallel with non-exclusive operators).
- Multithreading scope: global (may be called from any thread).
- Processed without parallelization.
Map1: Image containing the mapping data of camera 1.
Map2: Image containing the mapping data of camera 2.
CamParam1: Internal parameters of the projective camera 1.
  Number of elements: CamParam1 == 8 || CamParam1 == 12
CamParam2: Internal parameters of the projective camera 2.
  Number of elements: CamParam2 == 8 || CamParam2 == 12
RelPose: Point transformation from camera 2 to camera 1.
  Number of elements: 7
SubSampling: Sub-sampling factor.
  Default value: 1.0
  Suggested values: 0.5, 0.66, 1.0, 1.5, 2.0, 3.0, 4.0
Method: Type of rectification.
  Default value: 'geometric'
  List of values: 'geometric'
MapType: Type of mapping.
  Default value: 'bilinear'
  List of values: 'bilinear', 'coordinate_map_sub_pix', 'nearest_neighbor'
CamParamRect1: Rectified internal parameters of the projective camera 1.
  Number of elements: CamParamRect1 == 8 || CamParamRect1 == 12
CamParamRect2: Rectified internal parameters of the projective camera 2.
  Number of elements: CamParamRect2 == 8 || CamParamRect2 == 12
CamPoseRect1: Point transformation from the rectified camera 1 to
  the original camera 1.
  Number of elements: 7
CamPoseRect2: Point transformation from the rectified camera 2 to
  the original camera 2.
  Number of elements: 7
RelPoseRect: Point transformation from the rectified camera 2 to
  the rectified camera 1.
  Number of elements: 7
* read the internal and external stereo parameters
read_cam_par ('cam_left.dat', CamParam1)
read_cam_par ('cam_right.dat', CamParam2)
read_pose ('relpos.dat', RelPose)
* compute the mapping for rectified images
gen_binocular_rectification_map (Map1, Map2, CamParam1, CamParam2, RelPose, \
                                 1, 'geometric', 'bilinear', CamParamRect1, \
                                 CamParamRect2, CamPoseRect1, \
                                 CamPoseRect2, RelPoseRect)
* compute the disparities in online images
while (1)
grab_image_async (Image1, AcqHandle1, -1)
map_image (Image1, Map1, ImageMapped1)
grab_image_async (Image2, AcqHandle2, -1)
map_image (Image2, Map2, ImageMapped2)
binocular_disparity(ImageMapped1, ImageMapped2, Disparity, Score, 'sad', \
11, 11, 20, -40, 20, 2, 25, 'left_right_check', \
'interpolation')
endwhile
// read the internal and external stereo parameters
read_cam_par("cam_left.dat",CamParam1);
read_cam_par("cam_right.dat",CamParam2);
read_pose("relpos.dat",RelPose);
// compute the mapping for rectified images
gen_binocular_rectification_map(&Map1,&Map2,CamParam1,CamParam2,RelPose,1,
"geometric","bilinear",&CamParamRect1,
&CamParamRect2,&CamPoseRect1,&CamPoseRect2,
                                &RelPoseRect);
// compute the disparities in online images
while (1)
{
grab_image_async(&Image1,AcqHandle1,-1);
map_image(Image1,Map1,&ImageMapped1);
grab_image_async(&Image2,AcqHandle2,-1);
map_image(Image2,Map2,&ImageMapped2);
binocular_disparity(ImageMapped1,ImageMapped2,&Disparity,&Score,"sad",
11,11,20,-40,20,2,25,"left_right_check",
"interpolation");
}
gen_binocular_rectification_map returns 2 (H_MSG_TRUE) if all parameter values
are correct. If necessary, an exception is raised.
binocular_calibration
map_image
gen_image_to_world_plane_map
map_image,
gen_image_to_world_plane_map,
contour_to_world_plane_xld,
image_points_to_world_plane
3D Metrology