find_surface_model (Operator)

Name

find_surface_model — Find the best matches of a surface model in a 3D scene.

Signature

find_surface_model( : : SurfaceModelID, ObjectModel3D, RelSamplingDistance, KeyPointFraction, MinScore, ReturnResultHandle, GenParamName, GenParamValue : Pose, Score, SurfaceMatchingResultID)

Herror T_find_surface_model(const Htuple SurfaceModelID, const Htuple ObjectModel3D, const Htuple RelSamplingDistance, const Htuple KeyPointFraction, const Htuple MinScore, const Htuple ReturnResultHandle, const Htuple GenParamName, const Htuple GenParamValue, Htuple* Pose, Htuple* Score, Htuple* SurfaceMatchingResultID)

void FindSurfaceModel(const HTuple& SurfaceModelID, const HTuple& ObjectModel3D, const HTuple& RelSamplingDistance, const HTuple& KeyPointFraction, const HTuple& MinScore, const HTuple& ReturnResultHandle, const HTuple& GenParamName, const HTuple& GenParamValue, HTuple* Pose, HTuple* Score, HTuple* SurfaceMatchingResultID)

HPoseArray HObjectModel3D::FindSurfaceModel(const HSurfaceModel& SurfaceModelID, double RelSamplingDistance, double KeyPointFraction, const HTuple& MinScore, const HString& ReturnResultHandle, const HTuple& GenParamName, const HTuple& GenParamValue, HTuple* Score, HSurfaceMatchingResultArray* SurfaceMatchingResultID) const

HPose HObjectModel3D::FindSurfaceModel(const HSurfaceModel& SurfaceModelID, double RelSamplingDistance, double KeyPointFraction, double MinScore, const HString& ReturnResultHandle, const HTuple& GenParamName, const HTuple& GenParamValue, HTuple* Score, HSurfaceMatchingResult* SurfaceMatchingResultID) const

HPose HObjectModel3D::FindSurfaceModel(const HSurfaceModel& SurfaceModelID, double RelSamplingDistance, double KeyPointFraction, double MinScore, const char* ReturnResultHandle, const HTuple& GenParamName, const HTuple& GenParamValue, HTuple* Score, HSurfaceMatchingResult* SurfaceMatchingResultID) const

HPose HObjectModel3D::FindSurfaceModel(const HSurfaceModel& SurfaceModelID, double RelSamplingDistance, double KeyPointFraction, double MinScore, const wchar_t* ReturnResultHandle, const HTuple& GenParamName, const HTuple& GenParamValue, HTuple* Score, HSurfaceMatchingResult* SurfaceMatchingResultID) const   (Windows only)

HPoseArray HSurfaceModel::FindSurfaceModel(const HObjectModel3D& ObjectModel3D, double RelSamplingDistance, double KeyPointFraction, const HTuple& MinScore, const HString& ReturnResultHandle, const HTuple& GenParamName, const HTuple& GenParamValue, HTuple* Score, HSurfaceMatchingResultArray* SurfaceMatchingResultID) const

HPose HSurfaceModel::FindSurfaceModel(const HObjectModel3D& ObjectModel3D, double RelSamplingDistance, double KeyPointFraction, double MinScore, const HString& ReturnResultHandle, const HTuple& GenParamName, const HTuple& GenParamValue, HTuple* Score, HSurfaceMatchingResult* SurfaceMatchingResultID) const

HPose HSurfaceModel::FindSurfaceModel(const HObjectModel3D& ObjectModel3D, double RelSamplingDistance, double KeyPointFraction, double MinScore, const char* ReturnResultHandle, const HTuple& GenParamName, const HTuple& GenParamValue, HTuple* Score, HSurfaceMatchingResult* SurfaceMatchingResultID) const

HPose HSurfaceModel::FindSurfaceModel(const HObjectModel3D& ObjectModel3D, double RelSamplingDistance, double KeyPointFraction, double MinScore, const wchar_t* ReturnResultHandle, const HTuple& GenParamName, const HTuple& GenParamValue, HTuple* Score, HSurfaceMatchingResult* SurfaceMatchingResultID) const   (Windows only)

static HPoseArray HSurfaceMatchingResult::FindSurfaceModel(const HSurfaceModel& SurfaceModelID, const HObjectModel3D& ObjectModel3D, double RelSamplingDistance, double KeyPointFraction, const HTuple& MinScore, const HString& ReturnResultHandle, const HTuple& GenParamName, const HTuple& GenParamValue, HTuple* Score, HSurfaceMatchingResultArray* SurfaceMatchingResultID)

HPose HSurfaceMatchingResult::FindSurfaceModel(const HSurfaceModel& SurfaceModelID, const HObjectModel3D& ObjectModel3D, double RelSamplingDistance, double KeyPointFraction, double MinScore, const HString& ReturnResultHandle, const HTuple& GenParamName, const HTuple& GenParamValue, HTuple* Score)

HPose HSurfaceMatchingResult::FindSurfaceModel(const HSurfaceModel& SurfaceModelID, const HObjectModel3D& ObjectModel3D, double RelSamplingDistance, double KeyPointFraction, double MinScore, const char* ReturnResultHandle, const HTuple& GenParamName, const HTuple& GenParamValue, HTuple* Score)

HPose HSurfaceMatchingResult::FindSurfaceModel(const HSurfaceModel& SurfaceModelID, const HObjectModel3D& ObjectModel3D, double RelSamplingDistance, double KeyPointFraction, double MinScore, const wchar_t* ReturnResultHandle, const HTuple& GenParamName, const HTuple& GenParamValue, HTuple* Score)   (Windows only)

static void HOperatorSet.FindSurfaceModel(HTuple surfaceModelID, HTuple objectModel3D, HTuple relSamplingDistance, HTuple keyPointFraction, HTuple minScore, HTuple returnResultHandle, HTuple genParamName, HTuple genParamValue, out HTuple pose, out HTuple score, out HTuple surfaceMatchingResultID)

HPose[] HObjectModel3D.FindSurfaceModel(HSurfaceModel surfaceModelID, double relSamplingDistance, double keyPointFraction, HTuple minScore, string returnResultHandle, HTuple genParamName, HTuple genParamValue, out HTuple score, out HSurfaceMatchingResult[] surfaceMatchingResultID)

HPose HObjectModel3D.FindSurfaceModel(HSurfaceModel surfaceModelID, double relSamplingDistance, double keyPointFraction, double minScore, string returnResultHandle, HTuple genParamName, HTuple genParamValue, out HTuple score, out HSurfaceMatchingResult surfaceMatchingResultID)

HPose[] HSurfaceModel.FindSurfaceModel(HObjectModel3D objectModel3D, double relSamplingDistance, double keyPointFraction, HTuple minScore, string returnResultHandle, HTuple genParamName, HTuple genParamValue, out HTuple score, out HSurfaceMatchingResult[] surfaceMatchingResultID)

HPose HSurfaceModel.FindSurfaceModel(HObjectModel3D objectModel3D, double relSamplingDistance, double keyPointFraction, double minScore, string returnResultHandle, HTuple genParamName, HTuple genParamValue, out HTuple score, out HSurfaceMatchingResult surfaceMatchingResultID)

static HPose[] HSurfaceMatchingResult.FindSurfaceModel(HSurfaceModel surfaceModelID, HObjectModel3D objectModel3D, double relSamplingDistance, double keyPointFraction, HTuple minScore, string returnResultHandle, HTuple genParamName, HTuple genParamValue, out HTuple score, out HSurfaceMatchingResult[] surfaceMatchingResultID)

HPose HSurfaceMatchingResult.FindSurfaceModel(HSurfaceModel surfaceModelID, HObjectModel3D objectModel3D, double relSamplingDistance, double keyPointFraction, double minScore, string returnResultHandle, HTuple genParamName, HTuple genParamValue, out HTuple score)

Description

The operator find_surface_model finds the best matches of the surface model SurfaceModelID in the 3D scene ObjectModel3D and returns their poses in Pose.

The matching is divided into three steps:

  1. Approximate matching

  2. Sparse pose refinement

  3. Dense pose refinement

These steps and their corresponding generic parameters are described in more detail in a separate paragraph below. The following paragraphs describe the parameters and mention points to note.

The matching process and the parameters can be visualized and inspected using the HDevelop procedure debug_find_surface_model.
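As a minimal sketch of a typical call sequence (file names and all numeric values are placeholders that would have to be adapted to the actual setup):

* Load a 3D scene (e.g., acquired with a 3D sensor) and a previously prepared surface model.
read_object_model_3d ('scene.ply', 'm', [], [], Scene3D, Status)
read_surface_model ('object.sfm', SurfaceModelID)
* Search up to two instances and request a result handle for later inspection.
find_surface_model (SurfaceModelID, Scene3D, 0.05, 0.2, 0.3, 'true', 'num_matches', 2, Pose, Score, SurfaceMatchingResultID)
* Pose contains 7 values per match; Score contains one value per match.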

Points to Note

Matching the surface model uses points and normals of the 3D scene ObjectModel3D. For this, the scene must provide one of the following combinations:

  1. Points together with normal vectors.

  2. Points together with a 2D mapping (e.g., a scene created with xyz_to_object_model_3d), from which the normals can be computed.

If the model was trained for edge-supported surface-based matching, only the second combination is possible, i.e., the scene must contain a 2D mapping. Further, for such models it is necessary that the normal vectors point inwards.

Note that triangles or polygons in the passed scene are ignored; only the vertices are used for matching. It is therefore generally not recommended to use this operator on meshed scenes, such as CAD data. Instead, such a scene must be sampled beforehand using sample_object_model_3d to create points and normals (e.g., using the method 'fast_compute_normals').
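A sketch of this preprocessing step (the file name and sampling distance are placeholders):

* Read a triangulated CAD scene; its faces would be ignored by find_surface_model.
read_object_model_3d ('cad_scene.stl', 'm', [], [], SceneMesh3D, Status)
* Resample the mesh into points with normals (here with a sampling distance of 1 mm).
sample_object_model_3d (SceneMesh3D, 'fast_compute_normals', 0.001, [], [], ScenePoints3D)
* ScenePoints3D can now be passed to find_surface_model as ObjectModel3D.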

When using noisy point clouds, e.g., from time-of-flight cameras, the generic parameter 'scene_normal_computation' should be set to 'mls' in order to obtain more robust results (see below).
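For example, for a noisy time-of-flight scene the matching could be called as follows (the threshold values are examples only):

* Recompute the scene normals with the more robust 'mls' method during matching.
find_surface_model (SurfaceModelID, Scene3D, 0.05, 0.2, 0.3, 'false', 'scene_normal_computation', 'mls', Pose, Score, SurfaceMatchingResultID)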

Parameter Description

SurfaceModelID is the handle of the surface model. The model must have been created previously with create_surface_model or read with read_surface_model. Certain surface model parameters influencing the matching can be set using set_surface_model_param, such as 'pose_restriction_max_angle_diff', which restricts the allowed range of rotations.

ObjectModel3D is the handle of the 3D object model containing the scene in which the matches are searched. Note that it is assumed that the scene was observed from a camera looking along the z-axis. This is important for aligning the scene normals if they are re-computed (see below).

The parameter RelSamplingDistance controls the sampling distance during the step Approximate matching and the Score calculation during the step Sparse pose refinement. Its value is given relative to the diameter of the surface model. Decreasing RelSamplingDistance leads to more sampled points and, in turn, to a more stable but slower matching. Increasing RelSamplingDistance reduces the number of sampled scene points, which leads to a less stable but faster matching. For an illustration showing different values for RelSamplingDistance, please refer to the operator create_surface_model. The sampled scene points can be retrieved for visual inspection using the operator get_surface_matching_result. For a robust matching it is recommended that at least 50-100 scene points are sampled for each object instance.

The parameter KeyPointFraction controls how many of the sampled scene points are selected as key points. For example, if the value is set to 0.1, 10% of the sampled scene points are used as key points. For stable results it is important that each instance of the object is covered by several key points. Increasing KeyPointFraction means that more key points are selected from the scene, resulting in a slower but more stable matching. Decreasing KeyPointFraction has the inverse effect and results in a faster but less stable matching. The operator get_surface_matching_result can be used to retrieve the selected key points for visual inspection.
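A sketch of such an inspection is shown below; the result names 'sampled_scene' and 'key_points' are assumptions and should be checked against the documentation of get_surface_matching_result.

* Match with a result handle so that intermediate data can be queried.
find_surface_model (SurfaceModelID, Scene3D, 0.05, 0.2, 0, 'true', [], [], Pose, Score, SurfaceMatchingResultID)
* Retrieve the sampled scene points and the selected key points (assumed result names).
get_surface_matching_result (SurfaceMatchingResultID, 'sampled_scene', 0, SampledScene3D)
get_surface_matching_result (SurfaceMatchingResultID, 'key_points', 0, KeyPoints3D)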

The parameter MinScore can be used to filter the results. Only matches with a score exceeding the value of MinScore are returned. If MinScore is set to zero, all matches are returned. For edge-supported surface-based matching (see create_surface_model), four different sub-scores are determined (see their explanation below). As a consequence, you can filter the results based on each of them by passing a tuple with up to four threshold values to MinScore (see the sketch after the following list). These threshold values are sorted in the order of the scores (see below); missing entries are regarded as 0, meaning no filtering based on that sub-score. To find suitable values for the thresholds, the corresponding sub-scores of found object instances can be obtained using get_surface_matching_result. Depending on the settings, not all sub-scores might be available. The thresholds for unavailable sub-scores are ignored. The four sub-scores, whose threshold values have to be passed in exactly this order in MinScore, are:

  1. The overall score as returned in Score and through 'score' by get_surface_matching_result.

  2. The surface fraction of the score, i.e., how much of the object's surface was detected in the scene, returned through 'score_surface' by get_surface_matching_result.

  3. The 3D edge fraction of the score, i.e., how well the 3D edges of the object are aligned with the 3D edges detected in the scene, returned through 'score_3d_edges' by get_surface_matching_result.

  4. The 2D edge fraction of the score, i.e., how well the object silhouette projected into the images aligns with edges detected in the images (available only for the operators find_surface_model_image and refine_surface_model_pose_image), returned through 'score_2d_edges' by get_surface_matching_result.
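As a sketch, for an edge-supported model the following call would keep only matches whose overall score exceeds 0.3 and whose surface fraction exceeds 0.5, without filtering on the two edge fractions (threshold values are examples only):

* Tuple order: [score, score_surface, score_3d_edges, score_2d_edges]; missing entries count as 0.
find_surface_model (SurfaceModelID, Scene3D, 0.05, 0.2, [0.3, 0.5], 'false', [], [], Pose, Score, SurfaceMatchingResultID)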

The parameter ReturnResultHandle determines whether a surface matching result handle is returned. If the parameter is set to 'true', the handle is returned in the parameter SurfaceMatchingResultID. Additional details of the matching process can be queried with the operator get_surface_matching_result using that handle.

The parameters GenParamName and GenParamValue are used to set generic parameters. Both get a tuple of equal length, where the tuple passed to GenParamName contains the names of the parameters to set, and the tuple passed to GenParamValue contains the corresponding values. The possible parameter names and values are described in the paragraph The three steps of the matching.

The output parameter Pose gives the 3D poses of the found object instances. For every found instance of the surface model, its pose is given in the scene coordinate system; thus the pose is of the form scs_P_mcs, where scs denotes the coordinate system of the scene (which often is identical to the coordinate system of the sensor, the camera coordinate system) and mcs the model coordinate system (which is a 3D world coordinate system), see Transformations / Poses and “Solution Guide III-C - 3D Vision”. Thereby, the pose refers to the original coordinate system of the 3D object model that was passed to create_surface_model.
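A minimal sketch for using the returned poses, e.g., to transform the model into the scene for visualization. The name ObjectModel3DModel is an assumption and stands for the 3D object model that was passed to create_surface_model:

* Iterate over all matches; Pose holds 7 values per match.
for Index := 0 to |Score| - 1 by 1
    PoseI := Pose[Index * 7:Index * 7 + 6]
    * Transform the original model into the scene coordinate system for display.
    rigid_trans_object_model_3d (ObjectModel3DModel, PoseI, ObjectModel3DInScene)
endfor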

The output parameter Score returns a score for each match. Its value and interpretation depend on whether the pose refinement is enabled and on the selected score type (see the description of 'score_type' below).

The output parameter SurfaceMatchingResultID returns a handle for the surface matching result. Using this handle, additional details of the matching process can be queried with the operator get_surface_matching_result. Note that in order to return the handle, ReturnResultHandle has to be set to 'true'.

The Three Steps of the Matching

The matching is divided into three steps:

1. Approximate matching

The approximate poses of the instances of the surface model in the scene are searched.

First, points are sampled uniformly from the scene passed in ObjectModel3D. The sampling distance is controlled with the parameter RelSamplingDistance.

Then, a set of key points is selected from the sampled scene points. The number of selected key points is controlled with the parameter KeyPointFraction.

For each selected key point, the optimum pose of the surface model is computed under the assumption that the key point lies on the surface of the object. This is done by pairing the key point with all other sampled scene points and finding the point pairs on the surface model that have a similar distance and relative orientation. The similarity is defined by the parameters 'feat_step_size_rel' and 'feat_angle_resolution' in create_surface_model. The pose for which the largest number of points from the sampled scene lie on the object is considered to be the best pose for this key point. The number of sampled scene points on the object is considered to be the score of the pose.

If the model was trained for edge-supported surface-based matching, edges are extracted from the 3D scene, similar to the operator edges_object_model_3d, and sampled. In addition to the sampled 3D surface, the reference points are then also paired with all sampled edge points, and similar point-edge combinations are searched on the surface model. The score is then recomputed by multiplying the number of matching sampled edge points with the number of matching sampled scene points, and the best pose is extracted as described above.

From all key points, the poses with the best scores are then selected and used as approximate poses. The maximum number of returned poses is set with the generic parameter 'num_matches'. If the pose refinement is disabled, the score described above is returned for each pose in Score. The value of the score depends on the amount of surface of the instance that is visible in the scene and on the sampling rate of the scene. Only poses whose score exceeds MinScore are returned. To determine a good threshold for MinScore, it is recommended to test the matching on several scenes.

Note that the resulting poses from this step are only approximate. The error in the pose is proportional to the sampling rates of the surface model given in create_surface_model, and is typically less than 5% of the object's diameter.

The following generic parameters control the approximate matching and can be set with GenParamName and GenParamValue:

'num_matches':

Sets the maximum number of matches that are returned.

Suggested values: 1, 2, 5

Default value: 1

Assertion: 'num_matches' > 0

'max_overlap_dist_rel':

For efficiency reasons, the maximum overlap cannot be defined in 3D. Instead, only the minimum distance between the centers of the axis-aligned bounding boxes of two matches can be specified with 'max_overlap_dist_rel'. The value is set relative to the diameter of the object. Once an object with a high Score is found, all other matches are suppressed if the centers of their bounding boxes lie too close to the center of the first object. If the resulting matches must not overlap, the value for 'max_overlap_dist_rel' should be set to 1.0.

Note that only one of the parameters 'max_overlap_dist_rel' and 'max_overlap_dist_abs' should be set. If both are set, only the value of the last modified parameter is used.

Suggested values: 0.1, 0.5, 1

Default value: 0.5

Assertion: 'max_overlap_dist_rel' >= 0
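For example, allowing up to five matches while suppressing candidates whose bounding-box centers are closer than half the object diameter could look like this (values are examples only):

find_surface_model (SurfaceModelID, Scene3D, 0.05, 0.2, 0.3, 'false', ['num_matches','max_overlap_dist_rel'], [5, 0.5], Pose, Score, SurfaceMatchingResultID)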

'max_overlap_dist_abs':

This parameter has the same effect as the parameter 'max_overlap_dist_rel'. Note that in contrast to 'max_overlap_dist_rel', the value for 'max_overlap_dist_abs' is set as an absolute value. See 'max_overlap_dist_rel', above, for a description of the effect of this parameter.

Note that only one of the parameters 'max_overlap_dist_rel' and 'max_overlap_dist_abs' should be set. If both are set, only the value of the last modified parameter is used.

Suggested values: 1, 2, 3

Assertion: 'max_overlap_dist_abs' >= 0

'scene_normal_computation':

This parameter controls the normal computation of the sampled scene. In the default mode 'fast', normals are computed based on a small neighborhood of points. If the 3D scene already contains normals, these are used. In the mode 'mls', normals are computed based on a larger neighborhood using the more complex but more accurate 'mls' method. In this mode, the normals of the sampled scene are computed anew, regardless of whether it already contains normals or not. A more detailed description of the 'mls' method can be found in the description of the operator surface_normals_object_model_3d. The 'mls' mode is intended for noisy data, such as images from time-of-flight cameras. The (re-)computed normals are oriented like the original normals or, in case no original normals exist, such that they point towards the camera. This orientation implies the assumption that the scene was observed from a camera looking along the z-axis.

Value list: 'fast', 'mls'

Default value: 'fast'

'3d_edges':

Allows the 3D scene edges for edge-supported surface-based matching to be set manually, i.e., if the surface model was created with 'train_3d_edges' enabled. The parameter must be a 3D object model handle. The edges are usually a result of the operator edges_object_model_3d but can further be filtered in order to remove outliers. If this parameter is not given, find_surface_model will internally extract the edges similar to the operator edges_object_model_3d.
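A possible sketch of this workflow, assuming an edge-supported model; the amplitude threshold is an example value in scene units:

* Extract 3D edges once, e.g., to allow filtering outliers before matching.
edges_object_model_3d (Scene3D, 0.01, [], [], SceneEdges3D)
* ... optional filtering of SceneEdges3D ...
* Pass the prepared edges to the matching instead of re-extracting them internally.
find_surface_model (SurfaceModelID, Scene3D, 0.05, 0.2, 0.3, 'false', '3d_edges', SceneEdges3D, Pose, Score, SurfaceMatchingResultID)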

'3d_edge_min_amplitude_rel':

Sets the threshold for extracting 3D edges for edge-supported surface-based matching, i.e., if the surface model was created with 'train_3d_edges' enabled. The threshold is set relative to the diameter of the object. Note that if edges were passed manually with the generic parameter '3d_edges', this parameter is ignored. Otherwise, it behaves identically to the parameter MinAmplitude of the operator edges_object_model_3d.

Suggested values: 0.05, 0.1, 0.5

Default value: 0.05

Assertion: '3d_edge_min_amplitude_rel' >= 0

'3d_edge_min_amplitude_abs':

Similar to '3d_edge_min_amplitude_rel'; however, the value is given as an absolute distance and not relative to the object diameter.

Assertion: '3d_edge_min_amplitude_abs' >= 0

'viewpoint':

This parameter specifies the viewpoint from which the 3D data is seen. It is used to determine the viewing directions and edge directions if the surface model was created with 'train_3d_edges' enabled. For this, GenParamValue must contain a string consisting of the three coordinates (x, y, and z) of the viewpoint, separated by spaces. The viewpoint is defined in the same coordinate frame as ObjectModel3D.

Note that if edges were passed manually with the generic parameter '3d_edges', this parameter is ignored. Otherwise, it behaves identically to the generic parameter 'viewpoint' of the operator edges_object_model_3d.

To improve the result of the edge-supported surface-based matching, the viewpoint should roughly correspond to the position from which the scene was acquired.

A visualization of the viewpoint can be created using the procedure debug_find_surface_model in order to inspect its position.

Default value: '0 0 0'
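For example, if the scene was acquired by a sensor located at (0.1, 0, 0.5) in the coordinate frame of ObjectModel3D, the viewpoint could be passed as follows (coordinates are examples only):

find_surface_model (SurfaceModelID, Scene3D, 0.05, 0.2, 0.3, 'false', 'viewpoint', '0.1 0 0.5', Pose, Score, SurfaceMatchingResultID)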

'max_gap':

Gaps in the 3D data are closed, as far as they do not exceed the maximum gap size 'max_gap' [pixels] and the surface model was created with 'train_3d_edges' enabled. Larger gaps will contain edges at their boundary, while gaps smaller than this value will not. This suppresses edges around smaller patches that were not reconstructed by the sensor as well as edges at the more distant part of a discontinuity. For sensors with very high resolutions, the value should be increased to avoid spurious edges. Note that if edges were passed manually with the generic parameter '3d_edges', this parameter is ignored. Otherwise, it behaves identically to the generic parameter 'max_gap' of the operator edges_object_model_3d.

The influence of 'max_gap' can be inspected using the procedure debug_find_surface_model.

Default value: 30

'use_3d_edges':

Turns the edge-supported matching on or off. This can be used to perform matching without 3D edges, even though the model was created for edge-supported matching. If the model was not created for edge-supported surface-based matching, this parameter has no effect.

Value list: 'true', 'false'

Default value: 'true'

2. Sparse pose refinement

In this second step, the approximate poses found in the previous step are further refined. This increases the accuracy of the poses and the significance of the score value.

The sparse pose refinement uses the sampled scene points from the approximate matching. The pose is optimized such that the distances from the sampled scene points to the plane of the closest model point are minimal. The plane of each model point is defined as the plane perpendicular to its normal.

Additionally, if the model was trained for edge-supported surface-based matching and it was not disabled using the parameter 'use_3d_edges' (see above), the pose is also optimized such that the sampled edge points in the scene align with the edges of the surface model.

The sparse pose refinement is enabled by default. It can be disabled by setting the generic parameter 'sparse_pose_refinement' to 'false'. Since each key point produces one pose candidate, the total number of pose candidates to be optimized is proportional to the number of key points. For large scenes with much clutter, i.e., scene parts that do not belong to the object of interest, it can be faster to disable the sparse pose refinement.
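As a sketch, for a large cluttered scene the sparse refinement could be disabled while keeping the dense refinement enabled (threshold values are examples only):

find_surface_model (SurfaceModelID, Scene3D, 0.05, 0.2, 0.3, 'false', 'sparse_pose_refinement', 'false', Pose, Score, SurfaceMatchingResultID)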

The score of each pose is recomputed after the sparse pose refinement.

The following generic parameters control the sparse pose refinement and can be set with GenParamName and GenParamValue:

'sparse_pose_refinement':

Enables or disables the sparse pose refinement.

Value list: 'true', 'false'

Default value: 'true'

'score_type':

Sets the type of the score that is returned. Several different scores can be computed and returned after the pose refinement. This parameter has no effect if both the sparse and the dense pose refinement are disabled.

Note that for the computation of the score after the sparse pose refinement, the sampled scene points are used. For the computation of the score after the dense pose refinement, all scene points are used (see below). The score value after the sparse pose refinement therefore depends on the sampling distance of the scene, RelSamplingDistance.

The following score types are supported:

'model_point_fraction':

Without edge support, compute the surface fraction, i.e., the approximate fraction of the object's surface that is visible in the scene. This is done by counting the number of model points that have a corresponding scene point (as done for 'num_model_points') and dividing this number by the total number of points on the model.

0 <= Score <= 1

With edge support, compute the geometric mean of the surface fraction and the edge fraction. The edge fraction is the number of points from the sampled model edges that are aligned with edges of the scene, divided by the maximum number of potentially visible edge points on the model. Note that if the edges are extracted from multiple viewpoints, this might lead to a score greater than 1.

0 <= Score <= 1 (if the scene was acquired from one single viewpoint)

0 <= Score <= N (if the scene was merged from scenes that were acquired from N different viewpoints)

'num_model_points':

Count the number of sampled model points that were detected in the scene. A model point is defined to be 'detected' if there is a scene point close to it. The returned score will be between zero and the number of points in the sampled model.

'num_scene_points':

Compute a weighted count of the number of sampled scene points that lie on the surface of the found object. Each point is weighted based on the distance to the found object. This score is more accurate and stable than the score coming from the approximate matching. It depends on the sampling distance of the scene set in RelSamplingDistance. The returned score will be between zero and the number of points in the sampled scene.

Value list: 'model_point_fraction', 'num_model_points', 'num_scene_points'

Default value: 'model_point_fraction'

'pose_ref_use_scene_normals':

Enables or disables the usage of scene normals for the pose refinement. If this parameter is enabled and the scene contains point normals, those normals are used to increase the accuracy of the pose refinement. For this, the influence of scene points whose normals point in a different direction than the model normal is decreased. Note that the scene must contain point normals; otherwise, this parameter is ignored.

Value list: 'true', 'false'

Default value: 'false'

3. Dense pose refinement

Accurately refines the poses found in the previous steps. This step works similarly to the sparse pose refinement and minimizes the distances between the scene points and the planes of the closest model points. The differences are that

  1. only the 'num_matches' poses with the best scores from the previous step are refined;

  2. all points from the scene passed in ObjectModel3D are used for the refinement;

  3. if the model was created for edge-supported surface-based matching and it was not disabled using the parameter 'use_3d_edges' (see above), all extracted scene edge points are used for the refinement instead of only the sampled edge points.

Taking all points from the scene increases the accuracy of the refinement but is slower than refining on the subsampled scene points. The dense pose refinement is enabled by default, but can be disabled with the generic parameter 'dense_pose_refinement'.

After the dense pose refinement, the score of each match is recomputed. The threshold for considering a point to be 'on' the object is set with the generic parameter 'pose_ref_scoring_dist_rel' or 'pose_ref_scoring_dist_abs' (see below). When using the edge-supported matching, the parameters 'pose_ref_scoring_dist_edges_rel' or 'pose_ref_scoring_dist_edges_abs' control the corresponding thresholds for edges.

The final accuracy of the refined pose depends on several factors. The internal refinement algorithm has an accuracy of up to 1e-7 times the size (diameter) of the model. This maximum accuracy is only achieved under the best possible conditions. Further factors influencing the final accuracy are the shape of the model, the number of scene points, the noise of the scene points, the visible part of the object instance, and the position of the object.
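A sketch of trading runtime against accuracy in the dense refinement (parameter values are examples only):

* More iterations and no sub-sampling: slower but more accurate refinement.
find_surface_model (SurfaceModelID, Scene3D, 0.05, 0.2, 0.3, 'false', ['pose_ref_num_steps','pose_ref_sub_sampling','pose_ref_scoring_dist_rel'], [20, 1, 0.01], Pose, Score, SurfaceMatchingResultID)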

The following generic parameters influence the accuracy and speed of the dense pose refinement and can be set with GenParamName and GenParamValue:

'dense_pose_refinement':

Enables or disables the dense pose refinement.

Value list: 'true', 'false'

Default value: 'true'

'pose_ref_num_steps':

Number of iterations for the dense pose refinement. Increasing the number of iterations leads to a more accurate pose at the expense of runtime. However, once convergence is reached, the accuracy can no longer be increased, even if the number of steps is increased. Note that this parameter is ignored if the dense pose refinement is disabled.

Suggested values: 1, 3, 5, 20

Default value: 5

Assertion: 'pose_ref_num_steps' > 0

'pose_ref_sub_sampling':

Sets the rate of scene points to be used for the dense pose refinement. For example, if this value is set to 5, every 5th point from the scene is used for the pose refinement. This parameter allows an easy trade-off between speed and accuracy of the pose refinement: Increasing the value leads to fewer points being used and, in turn, to a faster but less accurate pose refinement. Decreasing the value has the inverse effect. Note that this parameter is ignored if the dense pose refinement is disabled.

Suggested values: 1, 2, 5, 10

Default value: 2

Assertion: 'pose_ref_sub_sampling' > 0

'pose_ref_dist_threshold_rel':

Set the distance threshold for dense pose refinement relative to the diameter of the surface model. Only scene points that are closer to the object than this distance are used for the optimization. Scene points further away are ignored.

Note that only one of the parameters 'pose_ref_dist_threshold_rel' and 'pose_ref_dist_threshold_abs' should be set. If both are set, only the value of the last modified parameter is used. Note that this parameter is ignored if the dense pose refinement is disabled.

Suggested values: 0.03, 0.05, 0.1, 0.2

Default value: 0.1

Assertion: 0 < 'pose_ref_dist_threshold_rel'

'pose_ref_dist_threshold_abs':

Set the distance threshold for dense pose refinement as an absolute value. See 'pose_ref_dist_threshold_rel' for a detailed description.

Note that only one of the parameters 'pose_ref_dist_threshold_rel' and 'pose_ref_dist_threshold_abs' should be set. If both are set, only the value of the last modified parameter is used.

Assertion: 0 < 'pose_ref_dist_threshold_abs'

'pose_ref_scoring_dist_rel':

Set the distance threshold for scoring relative to the diameter of the surface model. See the following 'pose_ref_scoring_dist_abs' for a detailed description.

Note that only one of the parameters 'pose_ref_scoring_dist_rel' and 'pose_ref_scoring_dist_abs' should be set. If both are set, only the value of the last modified parameter is used. Note that this parameter is ignored if the dense pose refinement is disabled.

Suggested values: 0.2, 0.01, 0.005, 0.0001

Default value: 0.005

Assertion: 0 < 'pose_ref_scoring_dist_rel'

'pose_ref_scoring_dist_abs':

Set the distance threshold for scoring. Only scene points that are closer to the object than this distance are considered to be 'on the model' when computing the score after the pose refinement. All other scene points are considered not to be on the model. The value should correspond to the amount of noise on the coordinates of the scene points. Note that this parameter is ignored if the dense pose refinement is disabled.

Note that only one of the parameters 'pose_ref_scoring_dist_rel' and 'pose_ref_scoring_dist_abs' should be set. If both are set, only the value of the last modified parameter is used.

'score_type':

Sets the type of the score that is returned. Several different scores can be computed and returned after the pose refinement. This parameter has no effect if both the sparse and the dense pose refinement are disabled.

The parameters 'pose_ref_scoring_dist_rel' and 'pose_ref_scoring_dist_abs' determine for all score types (except the edge fraction scores for edge-supported surface-based matching) how close a scene point needs to be to a model point in order to classify it as being on the model.

Note that for the computation of the score after the sparse pose refinement, the sampled scene points are used (see above). For the computation of the score after the dense pose refinement, all scene points are used. The score value after the dense pose refinement therefore does not depend on the sampling distance of the scene.

More details about the different score types can be found above in the description of the sparse pose refinement.

Value list: 'model_point_fraction', 'num_model_points', 'num_scene_points'

Default value: 'model_point_fraction'

'pose_ref_use_scene_normals':

Enables or disables the usage of scene normals for the pose refinement. This parameter is explained in more detail in the section Sparse pose refinement above.

Value list: 'true', 'false'

Default value: 'false'

'pose_ref_dist_threshold_edges_rel':

Set the distance threshold of edges for dense pose refinement relative to the diameter of the surface model. Only scene edges that are closer to the object edges than this distance are used for the optimization. Scene edges further away are ignored.

Note that only one of the parameters 'pose_ref_dist_threshold_edges_rel' and 'pose_ref_dist_threshold_edges_abs' should be set. If both are set, only the value of the last modified parameter is used. Note that this parameter is ignored if the dense pose refinement is disabled or if no edge-supported surface-based matching is used.

Suggested values: 0.03, 0.05, 0.1, 0.2

Default value: 0.1

Assertion: 0 < 'pose_ref_dist_threshold_edges_rel'

'pose_ref_dist_threshold_edges_abs':

Set the distance threshold of edges for dense pose refinement as an absolute value. See 'pose_ref_dist_threshold_edges_rel' for a detailed description.

Note that only one of the parameters 'pose_ref_dist_threshold_edges_rel' and 'pose_ref_dist_threshold_edges_abs' should be set. If both are set, only the value of the last modified parameter is used. Note that this parameter is ignored if the dense pose refinement is disabled or if no edge-supported surface-based matching is used.

Assertion: 0 < 'pose_ref_dist_threshold_edges_abs'

'pose_ref_scoring_dist_edges_rel':

Set the distance threshold of edges for scoring relative to the diameter of the surface model. See the following 'pose_ref_scoring_dist_edges_abs' for a detailed description.

Note that only one of the parameters 'pose_ref_scoring_dist_edges_rel' and 'pose_ref_scoring_dist_edges_abs' should be set. If both are set, only the value of the last modified parameter is used. Note that this parameter is ignored if the dense pose refinement is disabled or if no edge-supported surface-based matching is used.

Suggested values: 0.2, 0.01, 0.005, 0.0001

Default value: 0.005

Assertion: 0 < 'pose_ref_scoring_dist_edges_rel'

'pose_ref_scoring_dist_edges_abs':

Set the distance threshold of edges for scoring as an absolute value. Only scene edges that are closer to the object edges than this distance are considered to be 'on the model' when computing the score after the pose refinement. All other scene edges are considered not to be on the model. The value should correspond to the expected inaccuracy of the extracted scene edges and the inaccuracy of the refined pose.

Note that only one of the parameters 'pose_ref_scoring_dist_edges_rel' and 'pose_ref_scoring_dist_edges_abs' should be set. If both are set, only the value of the last modified parameter is used. Note that this parameter is ignored if the dense pose refinement is disabled or if no edge-supported surface-based matching is used.

Assertion: 0 < 'pose_ref_scoring_dist_edges_abs'

Execution Information

This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific operators even though the handle is used as an input parameter by those operators.

This operator supports cancelling timeouts and interrupts.

Parameters

SurfaceModelID (input_control)  surface_model → HSurfaceModel, HTuple (handle)

Handle of the surface model.

ObjectModel3D (input_control)  object_model_3d → HObjectModel3D, HTuple (handle)

Handle of the 3D object model containing the scene.

RelSamplingDistance (input_control)  real → HTuple (real)

Scene sampling distance relative to the diameter of the surface model.

Default value: 0.05

Suggested values: 0.1, 0.07, 0.05, 0.04, 0.03

Restriction: 0 < RelSamplingDistance < 1

KeyPointFraction (input_control)  real → HTuple (real)

Fraction of sampled scene points used as key points.

Default value: 0.2

Suggested values: 0.3, 0.2, 0.1, 0.05

Restriction: 0 < KeyPointFraction <= 1

MinScore (input_control)  real(-array) → HTuple (real / integer)

Minimum score of the returned poses.

Default value: 0

Restriction: MinScore >= 0

ReturnResultHandle (input_control)  string → HTuple (string)

Enable returning a result handle in SurfaceMatchingResultID.

Default value: 'false'

Suggested values: 'true', 'false'

GenParamName (input_control)  attribute.name-array → HTuple (string)

Names of the generic parameters.

Default value: []

List of values: '3d_edge_min_amplitude_abs', '3d_edge_min_amplitude_rel', '3d_edges', 'dense_pose_refinement', 'max_gap', 'max_overlap_dist_abs', 'max_overlap_dist_rel', 'num_matches', 'pose_ref_dist_threshold_abs', 'pose_ref_dist_threshold_edges_abs', 'pose_ref_dist_threshold_edges_rel', 'pose_ref_dist_threshold_rel', 'pose_ref_num_steps', 'pose_ref_scoring_dist_abs', 'pose_ref_scoring_dist_edges_abs', 'pose_ref_scoring_dist_edges_rel', 'pose_ref_scoring_dist_rel', 'pose_ref_sub_sampling', 'pose_ref_use_scene_normals', 'scene_normal_computation', 'score_type', 'sparse_pose_refinement', 'use_3d_edges', 'viewpoint'

GenParamValue (input_control)  attribute.value-array → HTuple (string / real / integer)

Values of the generic parameters.

Default value: []

Suggested values: 0, 1, 'true', 'false', 0.005, 0.01, 0.03, 0.05, 0.1, 'num_scene_points', 'model_point_fraction', 'num_model_points', 'fast', 'mls'

Pose (output_control)  pose(-array) → HPose, HTuple (real / integer)

3D pose of the surface model in the scene.

Score (output_control)  real-array → HTuple (real)

Score of the found instances of the surface model.

SurfaceMatchingResultID (output_control)  surface_matching_result(-array) → HSurfaceMatchingResult, HTuple (handle)

Handle of the matching result, if enabled in ReturnResultHandle.

Result

find_surface_model returns 2 (H_MSG_TRUE) if all parameters are correct. If necessary, an exception is raised.

Possible Predecessors

read_object_model_3d, xyz_to_object_model_3d, get_object_model_3d_params, read_surface_model, create_surface_model, get_surface_model_param, edges_object_model_3d

Possible Successors

refine_surface_model_pose, get_surface_matching_result, clear_surface_matching_result, clear_object_model_3d

Alternatives

refine_surface_model_pose, find_surface_model_image, refine_surface_model_pose_image

See also

refine_surface_model_pose, find_surface_model_image

Module

3D Metrology