
reconstruct_surface_stereo (Operator)

Name

reconstruct_surface_stereo — Reconstruct surface from calibrated multi-view stereo images.

Signature

reconstruct_surface_stereo(Images : : StereoModelID : ObjectModel3D)

Herror reconstruct_surface_stereo(const Hobject Images, const Hlong StereoModelID, Hlong* ObjectModel3D)

Herror T_reconstruct_surface_stereo(const Hobject Images, const Htuple StereoModelID, Htuple* ObjectModel3D)

void ReconstructSurfaceStereo(const HObject& Images, const HTuple& StereoModelID, HTuple* ObjectModel3D)

void HObjectModel3D::ReconstructSurfaceStereo(const HImage& Images, const HStereoModel& StereoModelID)

HObjectModel3D HStereoModel::ReconstructSurfaceStereo(const HImage& Images) const

static void HOperatorSet.ReconstructSurfaceStereo(HObject images, HTuple stereoModelID, out HTuple objectModel3D)

void HObjectModel3D.ReconstructSurfaceStereo(HImage images, HStereoModel stereoModelID)

HObjectModel3D HStereoModel.ReconstructSurfaceStereo(HImage images)

Description

The operator reconstruct_surface_stereo reconstructs a surface from multiple Images, acquired with a calibrated multi-view setup associated with a stereo model StereoModelID. The reconstructed surface is stored in the handle ObjectModel3D.

Preparation and requirements

A summary of the preparation of a stereo model for surface reconstruction (a compact code sketch of these steps is given after the list):

  1. Obtain a calibrated camera setup model (use calibrate_cameras or create_camera_setup_model) and configure it.

  2. Create a stereo model with create_stereo_model by selecting Method='surface_pairwise' or 'surface_fusion' (see 'Reconstruction algorithm').

  3. Configure the rectification parameters with set_stereo_model_param and afterwards set the image pairs with set_stereo_model_image_pairs.

  4. Configure the bounding box for the system with set_stereo_model_param (GenParamName='bounding_box').

  5. Configure the parameters of the pairwise reconstruction with set_stereo_model_param.

  6. For models with Method='surface_fusion', configure the parameters of the fusion algorithm with set_stereo_model_param.

  7. Acquire images with the calibrated camera setup and collect them in an image array Images.

  8. Perform the surface reconstruction with reconstruct_surface_stereo.

  9. Query and analyze intermediate results with get_stereo_model_object and get_stereo_model_object_model_3d.

  10. Readjust the parameters of the stereo model with set_stereo_model_param to improve the results with respect to quality and runtime.
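
The following HDevelop sketch outlines these steps. It assumes that a calibrated camera setup model CameraSetupModelID and the image array Images are already available; all parameter values are placeholders that must be adapted to the actual setup.

* Steps 2-3: create the stereo model and define the image pairs.
create_stereo_model (CameraSetupModelID, 'surface_pairwise', [], [], StereoModelID)
set_stereo_model_image_pairs (StereoModelID, [0, 1], [1, 2])
* Step 4: set a tight bounding box around the volume of interest
* (placeholder coordinates in the camera setup's world coordinate system).
set_stereo_model_param (StereoModelID, 'bounding_box', [-0.2, -0.2, 0.0, 0.2, 0.2, 0.4])
* Step 5: select the disparity method used for the pairwise reconstruction.
set_stereo_model_param (StereoModelID, 'disparity_method', 'binocular_disparity')
* Step 8: reconstruct the surface.
reconstruct_surface_stereo (Images, StereoModelID, ObjectModel3D)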

A camera setup model is associated with the stereo model StereoModelID upon its creation with create_stereo_model. The camera setup must contain calibration information about the cameras with which the images in the image array Images were acquired: the I-th image from the array corresponds to the camera with index I-1 from the camera setup, and the number of images in the array must be the same as the number of cameras in the camera setup. The Images must represent a static scene or must be taken simultaneously; otherwise, the reconstruction of the surface might be impossible.

A well-calibrated camera setup is the main requirement for a precise surface reconstruction. Therefore, special attention should be paid to obtaining a precise calibration of the cameras in the multi-view stereo setup used. HALCON provides calibration of a multi-view setup with the operator calibrate_cameras. The resulting calibrated camera setup can be accessed with a subsequent call to get_calib_data. Alternatively, for camera setups with known parameters, a calibrated camera setup can be created with create_camera_setup_model.
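
As a sketch, and assuming a calibration data model CalibDataID that has already been set up and filled with calibration observations for all cameras, the calibrated camera setup could be obtained like this:

* Calibrate all cameras of the multi-view setup.
calibrate_cameras (CalibDataID, Error)
* Extract the calibrated camera setup; it is later passed to create_stereo_model.
get_calib_data (CalibDataID, 'model', 'general', 'camera_setup_model', CameraSetupModelID)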

The proper selection of image pairs (see set_stereo_model_image_pairs) plays an important role for the overall quality of the surface reconstruction. On the one hand, camera pairs with a small baseline (small distance between the camera centers) are better suited for the binocular stereo disparity algorithms. Hence, pairs with small baselines should be preferred for acquiring accurate depth information of the scene. On the other hand, the pairs should provide different points of view, such that if one pair does not see a certain area of the surface, it is covered by another pair. Please note that the number of pairs linearly affects the runtime of the pairwise reconstruction. Therefore, use "as many as needed and as few as possible" image pairs in order to handle the trade-off between completeness of the surface reconstruction and reconstruction runtime.
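
For illustration only, assuming four cameras arranged side by side, neighboring cameras could be paired to keep the baselines small while still covering the object from several points of view:

* Pair neighboring cameras: (0,1), (1,2), and (2,3).
set_stereo_model_image_pairs (StereoModelID, [0, 1, 2], [1, 2, 3])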

A bounding box is associated with the stereo model StereoModelID. For the surface stereo reconstruction, it is required that the bounding box is valid (see set_stereo_model_param for further details). The reconstruction algorithm needs the bounding box for three reasons:

  1. Its projection into the images of each pair is used to estimate the disparity search range (MinDisparity and MaxDisparity) of the internal disparity computation.

  2. It delimits the volume of interest that is reconstructed; a bounding box that is too large unnecessarily increases the runtime.

  3. For models of type 'surface_fusion', it defines the volume that is sampled when computing the distance function and the isosurface.

Note that the method 'surface_fusion' will try to produce a closed surface. If the object is only observed and reconstructed from one side, the far end of the bounding box usually determines where the object is cut off.

Setting the parameters of the pairwise reconstruction before setting the parameters of the fusion is essential, since the pairwise reconstruction of the object is the input for the fusion algorithm. For a description of the parameters, see set_stereo_model_param. The choice of 'disparity_method' has a major influence. The objects in the scene should exhibit certain surface properties in order to make the scene suitable for dense surface reconstruction. First, the surface reflectance should be as close to Lambertian as possible (i.e., light falling on the surface is scattered such that its apparent brightness is the same regardless of the angle of view). Second, the surface should exhibit enough texture, but no repeating patterns.
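
A hedged configuration sketch; the parameter name 'binocular_num_levels' is assumed from the 'binocular_' prefix convention described further below, and the values are purely illustrative:

* Use the correlation-based disparity method internally ...
set_stereo_model_param (StereoModelID, 'disparity_method', 'binocular_disparity')
* ... and adapt one of its parameters via the 'binocular_' prefix.
set_stereo_model_param (StereoModelID, 'binocular_num_levels', 2)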

get_stereo_model_object can be used to view intermediate results, in particular the rectified, disparity, and score images. get_stereo_model_object_model_3d can be used to view the result of the pairwise reconstruction for models with Method='surface_fusion'. See the paragraph "Troubleshooting for the configuration of a stereo model" on how to use the obtained results.

Reconstruction algorithm

The operator reconstruct_surface_stereo performs multiple binocular stereo reconstructions and subsequently combines the results. The image pairs of this pairwise reconstruction are specified in StereoModelID as pairs of cameras of an associated calibrated multi-view setup.

For each image pair, the images are rectified before internally one of the operators binocular_disparity, binocular_disparity_mg, or binocular_disparity_ms is called. The disparity information is then converted to points in the coordinate system of the from-camera by an internal call of disparity_image_to_xyz. In the next step, the points are transformed into the common coordinate system that is specified in the camera setup model associated with StereoModelID and stored in a common point cloud together with the points extracted from other image pairs.

'surface_pairwise'

If the stereo model is of type 'surface_pairwise' (compare create_stereo_model), the point cloud obtained as described above is directly returned in ObjectModel3D. For each point, the normal vector is calculated by fitting a plane through the neighboring 3D points. In contrast to surface_normals_object_model_3d, the neighboring points are not determined in 3D but simply in 2D by using the neighboring points in the X, Y, and Z images. The normal vector of each 3D point is then set to the normal vector of the respective plane. Additionally, the score of the calculated disparity is attached to every reconstructed 3D point and stored as an extended attribute. Furthermore, the transformed coordinate images can be sub-sampled. If only one image pair is processed and no point meshing is enabled, reconstruct_surface_stereo stores an 'xyz_mapping' attribute in ObjectModel3D, which describes the mapping of the reconstructed 3D points to coordinates of the first image of the pair. This attribute is required by operators like segment_object_model_3d or object_model_3d_to_xyz (with Type='from_xyz_map'). In contrast to the single-pair case, if two or more image pairs are processed, reconstruct_surface_stereo does not store the 'xyz_mapping' attribute, since the reconstructed points would originate from different image pairs. The presence of the attribute in the output object model can be verified by calling get_object_model_3d_params with GenParamName='has_xyz_mapping'.
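
A brief sketch of this check; handling the return value as the string 'true' is an assumption:

* Check whether the reconstruction carries the 'xyz_mapping' attribute
* (only the case for a single image pair without meshing).
get_object_model_3d_params (ObjectModel3D, 'has_xyz_mapping', HasMapping)
if (HasMapping == 'true')
    * Convert the reconstructed points back into X, Y, and Z images.
    object_model_3d_to_xyz (X, Y, Z, ObjectModel3D, 'from_xyz_map', [], [])
endif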

The point cloud obtained in this way can additionally be meshed in a post-processing step. The object model returned in ObjectModel3D then contains the description of the mesh. The meshing algorithm used depends on the type of the stereo model. For a stereo model of type 'surface_pairwise', only a Poisson solver is supported, which can be activated by setting the parameter 'point_meshing' to 'poisson'. It creates a watertight mesh; therefore, surface regions with missing data are covered by an interpolated mesh.
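
Enabling this meshing step is a single parameter setting:

* Activate the Poisson meshing post-processing for 'surface_pairwise' models.
set_stereo_model_param (StereoModelID, 'point_meshing', 'poisson')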

'surface_fusion'

If the stereo model is of type 'surface_fusion', the point cloud obtained as described above is processed further. The goal is to obtain a preferably smooth surface while keeping form fidelity. To this end, the bounding box is sampled and each sample point is assigned a distance to a so-called isosurface (consisting of points with distance 0). The final distance values (and thus the isosurface) are obtained by minimizing an error function based on the points resulting from the pairwise reconstruction. This leads to a fusion of the reconstructed point clouds of all camera pairs (see the second paper in References below).

The calculation of the isosurface can be influenced by set_stereo_model_param with the parameters 'resolution', 'surface_tolerance', 'min_thickness', and 'smoothing'. The distance between sample points in the bounding box (in each coordinate direction) can be set by the parameter 'resolution'. The parameter 'smoothing' regulates the 'jumpiness' of the distance function by weighting the two terms in the error function: fidelity to the initial point clouds obtained by the pairwise reconstruction on the one hand, and the total variation of the distance function on the other hand. Note that the value of 'smoothing' that yields visually pleasing results for a given data set has to be found by trial and error. Values that are too small lead to many outliers being integrated into the surface, and the object surface then exhibits many jumps. Values that are too large lead to a loss of fidelity towards the point cloud of the pairwise reconstruction. Fidelity to the initial surfaces obtained by the pairwise reconstruction is not maintained in the entire bounding box, but only in cones of sight from the cameras to the initial surface. A sample point in such a cone is considered surely outside of the object (in front of the surface) or surely inside the object (behind the surface) with respect to the given camera if its distance to the initial surface exceeds a given value, which can be set by the parameter 'surface_tolerance'. The length of the considered cones behind the initial surface can roughly be set by the parameter 'min_thickness' (see set_stereo_model_param for more details). 'min_thickness' always has to be larger than or equal to 'surface_tolerance'.
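
A hedged configuration sketch for these parameters; the numeric values are placeholders (assuming a metric world coordinate system) and must be tuned for the actual data set:

* Sampling distance inside the bounding box.
set_stereo_model_param (StereoModelID, 'resolution', 0.002)
* Tolerance band around the pairwise surface and assumed minimum object
* thickness ('min_thickness' must be >= 'surface_tolerance').
set_stereo_model_param (StereoModelID, 'surface_tolerance', 0.002)
set_stereo_model_param (StereoModelID, 'min_thickness', 0.01)
* Weighting between data fidelity and total variation (trial and error).
set_stereo_model_param (StereoModelID, 'smoothing', 1.0)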

(Figure (1), (2): sketches illustrating 'surface_tolerance' and 'min_thickness'.)
The parameters 'surface_tolerance' and 'min_thickness' regulate the fidelity to the initial surface obtained by the pairwise reconstruction. Points in a cone of sight of a camera are considered surely outside of the object (in front of the surface) or surely inside the object (behind the surface) with respect to the given camera if their distance to the initial surface exceeds 'surface_tolerance'. Points behind the surface (viewed from the given camera) are only considered to lie inside the object if their distance to the initial surface does not exceed 'min_thickness'.

Each 3D point of the object model returned in ObjectModel3D is extracted from the isosurface where the distance function equals zero. Its normal vector is calculated from the gradient of the distance function. While the method 'surface_fusion' requires setting more parameters than the simple pairwise reconstruction, post-processing of the obtained point cloud representing the object surface usually becomes much simpler. In particular, suppression of outliers, smoothing, equidistant sub-sampling, and hole filling can be handled well and often in high quality by this method. The same holds for the possible internal meshing of the output surface, see the next paragraph. Note that the algorithm will try to produce a closed surface. If the object is only observed and reconstructed from one side, the far end of the bounding box usually determines where the object is cut off. The method 'surface_fusion' may take considerably longer than the simple pairwise reconstruction, depending mainly on the parameter 'resolution'.

Additionally, the point cloud obtained in this way can be meshed in a post-processing step. The object model returned in ObjectModel3D then contains the description of the mesh. For a stereo model of type 'surface_fusion', the 'marching tetrahedra' algorithm is used, which can be activated by setting the parameter 'point_meshing' to 'isosurface'. The desired meshed surface is extracted as the isosurface where the distance function equals zero. Note that ObjectModel3D contains more points if meshing of the isosurface is enabled, even if the same 'resolution' is used.
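
As with the Poisson meshing above, this is a single parameter setting:

* Activate the 'marching tetrahedra' meshing of the isosurface
* for 'surface_fusion' models.
set_stereo_model_param (StereoModelID, 'point_meshing', 'isosurface')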

Coloring the 3D object model

It is possible to provide color information for 3D object models that have been reconstructed with reconstruct_surface_stereo from the input images. The computation of the color depends on the chosen method set with set_stereo_model_param (see the explanation in the list there). Each 3D point is assigned a color value consisting of a red, green, and blue channel, which are stored as attributes named 'red', 'green', and 'blue' in the output 3D object model ObjectModel3D. These attributes can, for example, be used in the procedure visualize_object_model_3d with GenParamName = 'red_channel_attrib', 'green_channel_attrib', and 'blue_channel_attrib'. They can also be queried with get_object_model_3d_params or be processed with select_points_object_model_3d or other operators that use extended attributes. If the reconstruction has been performed using gray value images, the color value for the three channels is identical. If multi-channel images are used, the reconstruction is performed using the first channel only. The remaining channels are solely used for the calculation of the color values.
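
A small sketch of querying these attributes; that the attribute names can be passed to get_object_model_3d_params in exactly this form is an assumption based on the description above:

* Query the per-point color attributes stored by the reconstruction.
get_object_model_3d_params (ObjectModel3D, 'red', Red)
get_object_model_3d_params (ObjectModel3D, 'green', Green)
get_object_model_3d_params (ObjectModel3D, 'blue', Blue)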

If stereo models of type 'surface_fusion' are used, the reconstruction will contain points without a direct correspondence to points in the images. These points are not seen by any of the cameras of the stereo system and are therefore "invisible". A color value for these points is derived by assigning the value of the nearest visible neighbor. Normally, the nearest-neighbor search is not very time-consuming and can remain active. However, it may happen that the value for the parameter 'resolution' is considerably finer than the available image resolution. In this case, many invisible 3D points are reconstructed, which makes the nearest-neighbor search very time-consuming. In order to avoid an increased runtime, it is recommended to either adapt the value of 'resolution' or to switch off the calculation for invisible points. This can be done by calling set_stereo_model_param with GenParamName='color_invisible' and GenParamValue='false'. In this case, invisible points are assigned the gray value 255.
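
The corresponding call, as described above:

* Do not derive color values for invisible points; they get the gray value 255.
set_stereo_model_param (StereoModelID, 'color_invisible', 'false')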

Troubleshooting for the configuration of a stereo model

The proper configuration of a stereo model is not always easy. Please follow the workflow above. If the reconstruction results are not satisfactory, please consult the following hints and ideas:

Run in persistence mode

If you enable the 'persistence' mode of the stereo model (call set_stereo_model_param with GenParamName='persistence'), a subsequent call to reconstruct_surface_stereo will store intermediate iconic results, which provide additional information. They can be accessed by get_stereo_model_object_model_3d and get_stereo_model_object.
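
A sketch of this procedure; the value used to enable the persistence mode (here 1) and the form of the pair index are assumptions that should be checked against set_stereo_model_param and get_stereo_model_object_model_3d:

* Enable the persistence mode before reconstructing ...
set_stereo_model_param (StereoModelID, 'persistence', 1)
reconstruct_surface_stereo (Images, StereoModelID, ObjectModel3D)
* ... so that the intermediate pairwise result of, e.g., pair (0,1)
* can be inspected afterwards.
get_stereo_model_object_model_3d (StereoModelID, [0, 1], PairObjectModel3D)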

Check the quality of the calibration

Inspect the used bounding box

Make sure that the bounding box is tight around the volume of interest. The algorithm uses the projection of the bounding box into both images of each image pair in order to estimate the values for MinDisparity and MaxDisparity, which in turn are used in the internal call to binocular_disparity and binocular_disparity_ms. These values can be queried using get_stereo_model_param. If binocular_disparity_mg is used as the disparity method, suitable values for the parameters InitialGuess and InitialLevel are derived from the bounding box. However, these values can also be reset using set_stereo_model_param. Use the procedure gen_bounding_box_object_model_3d to create a 3D object model of your stereo model's bounding box, and inspect it together with the reconstructed 3D object model to verify the bounding box visually.

Improve the quality of the disparity images

After enabling the stereo model's 'persistence' mode (see above), inspect the disparity and the score images for each image pair. They are returned by get_stereo_model_object with a camera index pair [From, To] specifying the pair of interest in the parameter PairIndex and the values 'disparity_image' and 'score_image' in ObjectName, respectively. If both images exhibit significant imperfections (e.g., the disparity image does not really resemble the shape of the object seen in the image), try to adjust the parameters used for the internal call to binocular_disparity (the parameters with a 'binocular_' prefix) with set_stereo_model_param until some improvement is achieved.
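
A brief sketch of inspecting the intermediate images of the pair with From=0 and To=1:

* Requires the 'persistence' mode to be enabled and
* reconstruct_surface_stereo to have been called.
get_stereo_model_object (DisparityImage, StereoModelID, [0, 1], 'disparity_image')
get_stereo_model_object (ScoreImage, StereoModelID, [0, 1], 'score_image')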

Alternatively, a different method to calculate the disparities can be used. Besides the above-mentioned internal call of binocular_disparity, HALCON also provides the two other methods binocular_disparity_mg and binocular_disparity_ms. These methods offer, e.g., the calculation of disparities in textureless regions, at the expense of the reconstruction time compared with cross-correlation methods. However, for these methods, it can be necessary to adapt the parameters to the underlying data set as well. Depending on the chosen method, the user can set the parameters with a 'binocular_mg_' or a 'binocular_ms_' prefix until some improvement is achieved.

A detailed description of the provided methods and their parameters can be found in binocular_disparity, binocular_disparity_mg, and binocular_disparity_ms, respectively.

Fusion parameters

If the result of the pairwise reconstruction, as inspected by get_stereo_model_object_model_3d, cannot be improved any further, begin to adapt the fusion parameters. For a description of the parameters, see also set_stereo_model_param. Note that even a pairwise reconstruction in which the object is hardly discernible can sometimes still be turned into a sensible result by the fusion algorithm. In any case, the pairwise reconstruction should yield enough points as input for the fusion algorithm.

Runtime

In order to improve the runtime, consider the following hints:

Extent of the bounding box

The bounding box should be tight around the volume of interest. Otherwise, the runtime will increase unnecessarily and, for the method 'surface_fusion', drastically.

Sub-sampling in the rectification step

The stereo model parameter 'rectif_sub_sampling' (see set_stereo_model_param) controls the sub-sampling in the rectification step. Setting this factor to a value > 1.0 reduces the resolution of the rectified images compared to the original images. This factor has a direct impact on the performance of the subsequent disparity method, but it causes a loss of image detail. The parameter 'rectif_interpolation' can also have some impact, but typically not a significant one.
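
For example (the factor 2.0 is illustrative):

* Halve the resolution of the rectified images to speed up the
* disparity computation, at the cost of image detail.
set_stereo_model_param (StereoModelID, 'rectif_sub_sampling', 2.0)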

Disparity parameters

There is a trade-off between completeness of the pairwise surface reconstruction on the one hand and reconstruction runtime on the other. The stereo model offers three different methods to calculate the disparity images. Depending on the chosen method, the stereo model provides a particular set of parameters that enables a precise adaptation of the method to the used data set. If the method binocular_disparity is selected, only parameters with a 'binocular_' prefix can be set. For the method binocular_disparity_mg, all settable parameters have the prefix 'binocular_mg_', whereas for the method binocular_disparity_ms only parameters with the prefix 'binocular_ms_' are applicable.

Parameters using the method binocular_disparity

NumLevels, MaskWidth, MaskHeight, Filter, SubDisparity

Each of these parameters of binocular_disparity has a corresponding stereo model parameter with the prefix 'binocular_', and each has, to a greater or lesser extent, an impact on the performance. Adapting them properly can improve the performance.

Parameters using the method binocular_disparity_mg

GrayConstancy, GradientConstancy, Smoothness, InitialGuess, MGSolver, MGCycleType, MGPreRelax, MGPostRelax, InitialLevel, Iterations, PyramidFactor

Each of these parameters of binocular_disparity_mg has a corresponding stereo model parameter with the prefix 'binocular_mg_', and each has, to a greater or lesser extent, an impact on the performance and the result. Adapting them properly can improve the performance.

Parameters using the method binocular_disparity_ms

SurfaceSmoothing, EdgeSmoothing, ConsistencyCheck, SimilarityMeasure, SubDisparity

Each of these parameters of binocular_disparity_ms has a corresponding stereo model parameter with the prefix 'binocular_ms_', and each has, to a greater or lesser extent, an impact on the performance and the result. Adapting them properly can improve the performance.

Reconstruct only points with high disparity score

Besides adapting the sub-sampling, it is also possible to exclude points from the 3D reconstruction based on their computed disparity score. To do this, the user should first query the score images for the disparity values by calling get_stereo_model_object with ObjectName = 'score_image'. Depending on the distribution of these values, the user can decide whether disparities with a score beneath a certain threshold should be excluded from the reconstruction. This can be achieved with set_stereo_model_param using GenParamName = 'binocular_score_thresh'. The advantage of excluding points from the reconstruction is a slight speed-up, since it is not necessary to process the entire data set. As an alternative to the above-mentioned procedure, it is also possible to exclude points after executing reconstruct_surface_stereo by filtering the reconstructed 3D points. The advantage of this is that, at the expense of a slightly increased runtime, a second call to reconstruct_surface_stereo is not necessary.
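
A sketch under the assumption that a threshold of 0.5 fits the observed score distribution:

* Exclude disparities whose score falls below the threshold from the
* pairwise reconstruction.
set_stereo_model_param (StereoModelID, 'binocular_score_thresh', 0.5)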

Sub-sampling of X,Y,Z data

For the method 'surface_pairwise', you can use a larger sub-sampling step for the X, Y, Z data in the last step of the reconstruction algorithm by setting GenParamName='sub_sampling_step' with set_stereo_model_param. The reconstructed data will be much sparser, thus speeding up the post-processing.
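
For example (the step size 3 is illustrative):

* Keep only every third reconstructed point in row and column direction.
set_stereo_model_param (StereoModelID, 'sub_sampling_step', 3)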

Fusion parameters

For the method 'surface_fusion', enlarging the parameter 'resolution' will speed up the execution considerably.

Note that if a 3D object model is no longer needed or should be overwritten, the memory has to be freed first by calling the operator clear_object_model_3d.

Execution Information

This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific operators even though the handle is used as an input parameter by those operators.

Parameters

Images (input_object)  singlechannelimage-array → object (byte)

An image array acquired by the camera setup associated with the stereo model.

StereoModelID (input_control)  stereo_model → HStereoModel, HTuple (integer)

Handle of the stereo model.

ObjectModel3D (output_control)  object_model_3d → HObjectModel3D, HTuple (integer)

Handle to the resulting surface.

Possible Predecessors

create_stereo_model, get_calib_data, set_stereo_model_image_pairs

Possible Successors

get_stereo_model_object_model_3d

Alternatives

reconstruct_points_stereo

References

M. Kazhdan, M. Bolitho, and H. Hoppe: “Poisson Surface Reconstruction.” Symposium on Geometry Processing, June 2006.
C. Zach, T. Pock, and H. Bischof: “A Globally Optimal Algorithm for Robust TV-L1 Range Image Integration.” Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2007.

Module

3D Metrology

