
train_model_components (Operator)

Name

train_model_components — Train components and relations for component-based matching.

Signature

train_model_components(ModelImage, InitialComponents, TrainingImages : ModelComponents : ContrastLow, ContrastHigh, MinSize, MinScore, SearchRowTol, SearchColumnTol, SearchAngleTol, TrainingEmphasis, AmbiguityCriterion, MaxContourOverlap, ClusterThreshold : ComponentTrainingID)

Herror T_train_model_components(const Hobject ModelImage, const Hobject InitialComponents, const Hobject TrainingImages, Hobject* ModelComponents, const Htuple ContrastLow, const Htuple ContrastHigh, const Htuple MinSize, const Htuple MinScore, const Htuple SearchRowTol, const Htuple SearchColumnTol, const Htuple SearchAngleTol, const Htuple TrainingEmphasis, const Htuple AmbiguityCriterion, const Htuple MaxContourOverlap, const Htuple ClusterThreshold, Htuple* ComponentTrainingID)

void TrainModelComponents(const HObject& ModelImage, const HObject& InitialComponents, const HObject& TrainingImages, HObject* ModelComponents, const HTuple& ContrastLow, const HTuple& ContrastHigh, const HTuple& MinSize, const HTuple& MinScore, const HTuple& SearchRowTol, const HTuple& SearchColumnTol, const HTuple& SearchAngleTol, const HTuple& TrainingEmphasis, const HTuple& AmbiguityCriterion, const HTuple& MaxContourOverlap, const HTuple& ClusterThreshold, HTuple* ComponentTrainingID)

void HComponentTraining::HComponentTraining(const HImage& ModelImage, const HRegion& InitialComponents, const HImage& TrainingImages, HRegion* ModelComponents, const HTuple& ContrastLow, const HTuple& ContrastHigh, const HTuple& MinSize, const HTuple& MinScore, const HTuple& SearchRowTol, const HTuple& SearchColumnTol, const HTuple& SearchAngleTol, const HString& TrainingEmphasis, const HString& AmbiguityCriterion, double MaxContourOverlap, double ClusterThreshold)

void HComponentTraining::HComponentTraining(const HImage& ModelImage, const HRegion& InitialComponents, const HImage& TrainingImages, HRegion* ModelComponents, Hlong ContrastLow, Hlong ContrastHigh, Hlong MinSize, double MinScore, Hlong SearchRowTol, Hlong SearchColumnTol, double SearchAngleTol, const HString& TrainingEmphasis, const HString& AmbiguityCriterion, double MaxContourOverlap, double ClusterThreshold)

void HComponentTraining::HComponentTraining(const HImage& ModelImage, const HRegion& InitialComponents, const HImage& TrainingImages, HRegion* ModelComponents, Hlong ContrastLow, Hlong ContrastHigh, Hlong MinSize, double MinScore, Hlong SearchRowTol, Hlong SearchColumnTol, double SearchAngleTol, const char* TrainingEmphasis, const char* AmbiguityCriterion, double MaxContourOverlap, double ClusterThreshold)

HRegion HComponentTraining::TrainModelComponents(const HImage& ModelImage, const HRegion& InitialComponents, const HImage& TrainingImages, const HTuple& ContrastLow, const HTuple& ContrastHigh, const HTuple& MinSize, const HTuple& MinScore, const HTuple& SearchRowTol, const HTuple& SearchColumnTol, const HTuple& SearchAngleTol, const HString& TrainingEmphasis, const HString& AmbiguityCriterion, double MaxContourOverlap, double ClusterThreshold)

HRegion HComponentTraining::TrainModelComponents(const HImage& ModelImage, const HRegion& InitialComponents, const HImage& TrainingImages, Hlong ContrastLow, Hlong ContrastHigh, Hlong MinSize, double MinScore, Hlong SearchRowTol, Hlong SearchColumnTol, double SearchAngleTol, const HString& TrainingEmphasis, const HString& AmbiguityCriterion, double MaxContourOverlap, double ClusterThreshold)

HRegion HComponentTraining::TrainModelComponents(const HImage& ModelImage, const HRegion& InitialComponents, const HImage& TrainingImages, Hlong ContrastLow, Hlong ContrastHigh, Hlong MinSize, double MinScore, Hlong SearchRowTol, Hlong SearchColumnTol, double SearchAngleTol, const char* TrainingEmphasis, const char* AmbiguityCriterion, double MaxContourOverlap, double ClusterThreshold)

HRegion HImage::TrainModelComponents(const HRegion& InitialComponents, const HImage& TrainingImages, const HTuple& ContrastLow, const HTuple& ContrastHigh, const HTuple& MinSize, const HTuple& MinScore, const HTuple& SearchRowTol, const HTuple& SearchColumnTol, const HTuple& SearchAngleTol, const HString& TrainingEmphasis, const HString& AmbiguityCriterion, double MaxContourOverlap, double ClusterThreshold, HComponentTraining* ComponentTrainingID) const

HRegion HImage::TrainModelComponents(const HRegion& InitialComponents, const HImage& TrainingImages, Hlong ContrastLow, Hlong ContrastHigh, Hlong MinSize, double MinScore, Hlong SearchRowTol, Hlong SearchColumnTol, double SearchAngleTol, const HString& TrainingEmphasis, const HString& AmbiguityCriterion, double MaxContourOverlap, double ClusterThreshold, HComponentTraining* ComponentTrainingID) const

HRegion HImage::TrainModelComponents(const HRegion& InitialComponents, const HImage& TrainingImages, Hlong ContrastLow, Hlong ContrastHigh, Hlong MinSize, double MinScore, Hlong SearchRowTol, Hlong SearchColumnTol, double SearchAngleTol, const char* TrainingEmphasis, const char* AmbiguityCriterion, double MaxContourOverlap, double ClusterThreshold, HComponentTraining* ComponentTrainingID) const

static void HOperatorSet.TrainModelComponents(HObject modelImage, HObject initialComponents, HObject trainingImages, out HObject modelComponents, HTuple contrastLow, HTuple contrastHigh, HTuple minSize, HTuple minScore, HTuple searchRowTol, HTuple searchColumnTol, HTuple searchAngleTol, HTuple trainingEmphasis, HTuple ambiguityCriterion, HTuple maxContourOverlap, HTuple clusterThreshold, out HTuple componentTrainingID)

public HComponentTraining(HImage modelImage, HRegion initialComponents, HImage trainingImages, out HRegion modelComponents, HTuple contrastLow, HTuple contrastHigh, HTuple minSize, HTuple minScore, HTuple searchRowTol, HTuple searchColumnTol, HTuple searchAngleTol, string trainingEmphasis, string ambiguityCriterion, double maxContourOverlap, double clusterThreshold)

public HComponentTraining(HImage modelImage, HRegion initialComponents, HImage trainingImages, out HRegion modelComponents, int contrastLow, int contrastHigh, int minSize, double minScore, int searchRowTol, int searchColumnTol, double searchAngleTol, string trainingEmphasis, string ambiguityCriterion, double maxContourOverlap, double clusterThreshold)

HRegion HComponentTraining.TrainModelComponents(HImage modelImage, HRegion initialComponents, HImage trainingImages, HTuple contrastLow, HTuple contrastHigh, HTuple minSize, HTuple minScore, HTuple searchRowTol, HTuple searchColumnTol, HTuple searchAngleTol, string trainingEmphasis, string ambiguityCriterion, double maxContourOverlap, double clusterThreshold)

HRegion HComponentTraining.TrainModelComponents(HImage modelImage, HRegion initialComponents, HImage trainingImages, int contrastLow, int contrastHigh, int minSize, double minScore, int searchRowTol, int searchColumnTol, double searchAngleTol, string trainingEmphasis, string ambiguityCriterion, double maxContourOverlap, double clusterThreshold)

HRegion HImage.TrainModelComponents(HRegion initialComponents, HImage trainingImages, HTuple contrastLow, HTuple contrastHigh, HTuple minSize, HTuple minScore, HTuple searchRowTol, HTuple searchColumnTol, HTuple searchAngleTol, string trainingEmphasis, string ambiguityCriterion, double maxContourOverlap, double clusterThreshold, out HComponentTraining componentTrainingID)

HRegion HImage.TrainModelComponents(HRegion initialComponents, HImage trainingImages, int contrastLow, int contrastHigh, int minSize, double minScore, int searchRowTol, int searchColumnTol, double searchAngleTol, string trainingEmphasis, string ambiguityCriterion, double maxContourOverlap, double clusterThreshold, out HComponentTraining componentTrainingID)

Description

train_model_components extracts the final (rigid) model components and trains their mutual relations, i.e., their relative movements, on the basis of the initial components by considering several training images. The result of the training is returned in the handle ComponentTrainingID. The training result can subsequently be used to create the actual component model with create_trained_component_model.
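
The following HDevelop sketch illustrates this overall workflow under assumed placeholder values; in particular, the parameter list passed to create_trained_component_model is an assumption and should be checked against that operator's own reference entry.

* Train the rigid model components and their relations
* (assumed placeholder parameter values).
train_model_components (ModelImage, InitialComponents, TrainingImages, \
                        ModelComponents, 'auto', 'auto', 'auto', 0.5, \
                        -1, -1, -1, 'speed', 'rigidity', 0.2, 0.5, \
                        ComponentTrainingID)
* Create the actual component model from the training result
* (placeholder values; see create_trained_component_model).
create_trained_component_model (ComponentTrainingID, -rad(30), rad(60), \
                                20, 0.5, 'auto', 'auto', 'none', \
                                'use_polarity', 'false', \
                                ComponentModelID, RootRanking)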

train_model_components should be used in cases where the relations of the components are not known and should be trained automatically. In contrast, if the relations are known, no training needs to be performed with train_model_components. Instead, the component model can be created directly with create_component_model.

If the initial components have been created automatically with gen_initial_components, InitialComponents contains the contour regions of the initial components. In contrast, if the initial components are defined by the user, they can be passed directly in InitialComponents. However, instead of the contour region of each initial component, its enclosing region must be passed in the tuple. The (contour) regions refer to the model image ModelImage. If the initial components have been obtained using gen_initial_components, the model image should be the same as in gen_initial_components. Please note that each initial component is part of at most one rigid model component: during the training, initial components can be merged into rigid model components if required (see below), but they cannot be split and distributed to several rigid model components.
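
As a hedged sketch of the two ways to obtain InitialComponents (the concrete numeric values and the argument list of gen_initial_components are assumptions chosen for illustration):

* Variant A: derive the initial components automatically;
* InitialComponents then contains their contour regions.
gen_initial_components (ModelImage, InitialComponents, 20, 40, 30, \
                        'connection', [], [])
* Variant B: define the initial components manually by passing one
* enclosing region per initial component.
gen_rectangle2 (Component1, 212, 233, 0.62, 167, 29)
gen_rectangle2 (Component2, 298, 363, 1.17, 162, 34)
concat_obj (Component1, Component2, InitialComponents)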

train_model_components uses the following approach to perform the training: In the first step, the initial components are searched for in all training images. In some cases, one initial component may be found more than once in a training image. Thus, in the second step, the resulting ambiguities are resolved, i.e., the most probable pose of each initial component is determined. Consequently, after resolving the ambiguities, at most one pose of each initial component is available in each training image. In the next step, the poses are analyzed and those initial components that do not show any relative movement are clustered into the final rigid model components. Finally, in the last step, the relations between the model components are computed by analyzing their relative poses over the sequence of training images. The parameters associated with these steps are explained in the following.

The training is performed on the basis of several training images, which are passed in TrainingImages. Each training image must show at most one instance of the compound object, and together the training images should cover the full range of allowed relative movements of the model components. If, for example, the component model of an on/off switch should be trained, one training image that shows the switch turned off is sufficient if the switch in the model image is turned on, or vice versa.

The principle of the training is to find the initial components in all training images and to analyze their poses. For this, a shape model is created for each initial component (see create_shape_model), which is then used to determine the poses (position and orientation) of the initial components in the training images (see find_shape_model). Depending on the mode that is set with set_system('pregenerate_shape_models',...), the shape model is either pregenerated completely or computed online during the search. The mode influences the computation time as well as the robustness of the training. Furthermore, it should be noted that if single-channel images are used in ModelImage as well as in TrainingImages, the metric 'use_polarity' is used internally for create_shape_model, while if multichannel images are used in either ModelImage or TrainingImages, the metric 'ignore_color_polarity' is used. Finally, while the number of channels in ModelImage and TrainingImages may differ, e.g., to facilitate model generation from synthetically generated images, the number of channels in all the images in TrainingImages must be identical. For further details see create_shape_model. The creation of the shape models can be influenced by choosing appropriate values for the parameters ContrastLow, ContrastHigh, and MinSize. These parameters have the same meaning as in gen_initial_components and can be determined automatically by passing 'auto': If both hysteresis thresholds should be determined automatically, both ContrastLow and ContrastHigh must be set to 'auto'. In contrast, if only one threshold value should be determined, ContrastLow must be set to 'auto' while ContrastHigh must be set to an arbitrary value different from 'auto'. If the initial components have been created automatically by gen_initial_components, the parameters ContrastLow, ContrastHigh, and MinSize should be set to the same values as in gen_initial_components.
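
A brief sketch of these settings; the fixed upper threshold of 60 and the remaining training parameters are placeholder assumptions:

* Optionally pregenerate the internally used shape models completely.
set_system ('pregenerate_shape_models', 'true')
* Determine both hysteresis thresholds automatically ...
train_model_components (ModelImage, InitialComponents, TrainingImages, \
                        ModelComponents, 'auto', 'auto', 'auto', 0.5, \
                        -1, -1, -1, 'speed', 'rigidity', 0.2, 0.5, \
                        ComponentTrainingID)
* ... or fix the upper threshold (here 60) and determine only the
* lower threshold automatically.
train_model_components (ModelImage, InitialComponents, TrainingImages, \
                        ModelComponents, 'auto', 60, 'auto', 0.5, \
                        -1, -1, -1, 'speed', 'rigidity', 0.2, 0.5, \
                        ComponentTrainingID)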

To influence the search for the initial components, the parameters MinScore, SearchRowTol, SearchColumnTol, SearchAngleTol, and TrainingEmphasis can be set. The parameter MinScore determines what score a potential match must at least have to be regarded as an instance of the initial component in the training image. The larger MinScore is chosen, the faster the training is. If the initial components can be expected never to be occluded in the training images, MinScore may be set as high as 0.8 or even 0.9 (see find_shape_model).

By default, the components are searched for only at points at which the component lies completely within the respective training image. This means that a component will not be found if it extends beyond the borders of the image, even if it would achieve a score greater than MinScore. This behavior can be changed with set_system('border_shape_models','true'), which will cause components that extend beyond the image border to be found if they achieve a score greater than MinScore. Here, points lying outside the image are regarded as being occluded, i.e., they lower the score. It should be noted that the runtime of the training will increase in this mode.
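
If components close to the border of the training images must be found, the system parameter mentioned above can be switched as sketched here:

* Allow components extending beyond the image border to be found;
* points outside the image are treated as occluded and lower the score.
set_system ('border_shape_models', 'true')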

When dealing with a high number of initial components and many training images, the training may take a long time (up to several minutes). In order to speed up the training, it is possible to restrict the search space for the individual initial components in the training images. For this, the poses of the initial components in the model image are used as reference poses. The parameters SearchRowTol and SearchColumnTol specify the position tolerance region relative to the reference position in which the search is performed. Assume, for example, that the position of an initial component in the model image is (100,200), SearchRowTol is set to 20, and SearchColumnTol is set to 10. Then, this initial component is searched for in the training images only within the axis-aligned rectangle determined by the upper left corner (80,190) and the lower right corner (120,210). The same holds for the orientation: by specifying the angle tolerance SearchAngleTol, the search is restricted to the angle range [-SearchAngleTol,+SearchAngleTol]. Thus, it is possible to considerably reduce the computational effort during the training by an adequate acquisition of the training images. If one of the three parameters is set to -1, no restriction of the search space is applied in the corresponding dimension.
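
A sketch of a training call that restricts the search space as in the example above (position tolerance of 20 rows and 10 columns, angle tolerance of 10 degrees); the remaining values are placeholder assumptions:

train_model_components (ModelImage, InitialComponents, TrainingImages, \
                        ModelComponents, 'auto', 'auto', 'auto', 0.5, \
                        20, 10, rad(10), 'speed', 'rigidity', 0.2, 0.5, \
                        ComponentTrainingID)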

The input parameters ContrastLow, ContrastHigh, MinSize, MinScore, SearchRowTol, SearchColumnTol, and SearchAngleTol must either contain one element, in which case the parameter is used for all initial components, or contain the same number of elements as there are initial components in InitialComponents, in which case each element refers to the corresponding initial component in InitialComponents.
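
For illustration, assuming InitialComponents contains three initial components, an individual minimum score can be passed per component while single values are used for the other parameters (all values are placeholders):

train_model_components (ModelImage, InitialComponents, TrainingImages, \
                        ModelComponents, 'auto', 'auto', 'auto', \
                        [0.5,0.7,0.6], -1, -1, -1, 'speed', 'rigidity', \
                        0.2, 0.5, ComponentTrainingID)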

The parameter TrainingEmphasis offers another possibility to influence the computation time of the training and, at the same time, its robustness. If TrainingEmphasis is set to 'speed', the training is comparatively fast, but in some cases some initial components may not be found in the training images or may be found at a wrong pose. Consequently, this would lead to an incorrect computation of the rigid model components and their relations. The poses of the found initial components in the individual training images can be examined by using get_training_components. If erroneous matches occur, the training should be restarted with TrainingEmphasis set to 'reliability'. This results in a higher robustness at the cost of a longer computation time.
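
A sketch of such an inspection; the control values passed to get_training_components here ('initial_components', the training image index 1, and 'false' for the orientation marks) are assumptions, so consult that operator's reference entry for the exact semantics:

* Examine the initial components found in the first training image.
get_training_components (FoundComponents, ComponentTrainingID, \
                         'initial_components', 1, 'false', \
                         Row, Column, Angle, Score)
* If erroneous matches are visible, repeat the training with
* TrainingEmphasis set to 'reliability'.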

Furthermore, during the pose determination of the initial components, ambiguities may occur if the initial components are rotationally symmetric or if several initial components are identical or at least similar to each other. To resolve the ambiguities, the most probable pose is calculated for each initial component in each training image. For this, the individual ambiguous poses are evaluated. The pose of an initial component receives a good evaluation if the relative pose of the initial component with respect to the other initial components is similar to the corresponding relative pose in the model image. The method used to evaluate this similarity can be chosen with AmbiguityCriterion. In almost all cases the best results are obtained with 'rigidity', which assumes the rigidity of the compound object. The more the rigidity of the compound object is violated by the pose of the initial component, the worse its evaluation is. In the case of 'distance', only the distance between the initial components is considered during the evaluation. Hence, the pose of the initial component receives a good evaluation if its distances to the other initial components are similar to the corresponding distances in the model image. Accordingly, when choosing 'orientation', only the relative orientation is considered during the evaluation. Finally, the simultaneous consideration of distance and orientation can be achieved by choosing 'distance_orientation'. In contrast to 'rigidity', the relative pose of the initial components is not considered when using 'distance_orientation'.

The process of resolving the ambiguities can be further influenced by the parameter MaxContourOverlap. This parameter describes the extent to which the contours of two initial component matches may overlap each other. Let the letters 'I' and 'T', for example, be two initial components that should be searched for in a training image that shows the string 'IT'. Then, the initial component 'T' should be found at its correct pose. In contrast, the initial component 'I' will be found at its correct pose ('I') but also at the pose of the 'T' because of the similarity of the two components. To discard the wrong match of the initial component 'I', an appropriate value for MaxContourOverlap can be chosen: If overlapping matches should be tolerated, MaxContourOverlap should be set to 1. If overlapping matches should be completely avoided, MaxContourOverlap should be set to 0. By choosing a value between 0 and 1, the maximum fraction of overlapping contour pixels can be adjusted.

The decision which initial components can be clustered into rigid model components is made based on the poses of the initial components in the model image and in the training images. Two initial components are merged if they do not show any relative movement over all images. If, in the case of the above-mentioned switch, the training images showed the same switch state as the model image, the algorithm would merge the respective initial components because it would assume that the entire switch is one rigid model component. The extent to which initial components are merged can be influenced with the parameter ClusterThreshold. This cluster threshold is based on the probability that two initial components belong to the same rigid model component. Thus, ClusterThreshold describes the minimum probability two initial components must have in order to be merged. Since the threshold is based on a probability value, it must lie in the interval between 0 and 1. The greater the threshold, the fewer initial components are merged. If a threshold of 0 is chosen, all initial components are merged into one rigid component, while for a threshold of 1 no merging is performed and each initial component is adopted as one rigid model component.
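
If the resulting clustering turns out to be too coarse or too fine, the threshold can be varied on the existing training result with cluster_model_components (listed among the possible successors below) without repeating the full training; the argument order shown here is an assumption:

* Re-cluster the initial components with a stricter threshold.
cluster_model_components (TrainingImages, ModelComponents, \
                          ComponentTrainingID, 'rigidity', 0.2, 0.7)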

The final rigid model components are returned in ModelComponents. Later, the index of a component region in ModelComponents is used to denote the model component. The poses of the components in the training images can be examined by using get_training_components.

After the determination of the model components, their relative movements are analyzed by determining the movement of one component with respect to a second component for each pair of components. For this, the components are referred to their reference points. The reference point of a component is the center of gravity of its contour region, which is returned in ModelComponents. It can be calculated by calling area_center. Finally, the relative movement is represented by the smallest enclosing rectangle of arbitrary orientation of the reference point movement and by the smallest enclosing angle interval of the relative orientation of the second component over all images. The determined relations can be inspected by using get_component_relations.
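
A sketch of how the reference points and relations can be inspected; the control values passed to get_component_relations (reference component 0 and 'all' for the image selection) are assumptions:

* Reference points (centers of gravity) of the model components.
area_center (ModelComponents, Area, RowRef, ColumnRef)
* Relations of all components relative to component 0 over all
* training images.
get_component_relations (Relations, ComponentTrainingID, 0, 'all', \
                         Row, Column, Phi, Length1, Length2, \
                         AngleStart, AngleExtent)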

Execution Information

This operator returns a handle. Note that the state of an instance of this handle type may be changed by specific operators even though the handle is used as an input parameter by those operators.

Parameters

ModelImage (input_object)  (multichannel-)image → object (byte / uint2)

Input image from which the shape models of the initial components should be created.

InitialComponents (input_object)  region-array → object

Contour regions or enclosing regions of the initial components.

TrainingImages (input_object)  (multichannel-)image(-array) → object (byte / uint2)

Training images that are used for training the model components.

ModelComponents (output_object)  region(-array) → object

Contour regions of rigid model components.

ContrastLow (input_control)  integer(-array) → (integer / string)

Lower hysteresis threshold for the contrast of the initial components in the image.

Default value: 'auto'

Suggested values: 'auto', 10, 20, 30, 40, 60, 80, 100, 120, 140, 160

Restriction: ContrastLow > 0

ContrastHigh (input_control)  integer(-array) → (integer / string)

Upper hysteresis threshold for the contrast of the initial components in the image.

Default value: 'auto'

Suggested values: 'auto', 10, 20, 30, 40, 60, 80, 100, 120, 140, 160

Restriction: ContrastHigh > 0 && ContrastHigh >= ContrastLow

MinSize (input_control)  integer(-array) → (integer / string)

Minimum size of connected contour regions.

Default value: 'auto'

Suggested values: 'auto', 0, 5, 10, 20, 30, 40

Restriction: MinSize >= 0

MinScore (input_control)  real(-array) → (real)

Minimum score of the instances of the initial components to be found.

Default value: 0.5

Suggested values: 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0

Minimum increment: 0.01

Recommended increment: 0.05

Restriction: 0 <= MinScore && MinScore <= 1

SearchRowTol (input_control)  integer(-array) → (integer)

Search tolerance in row direction.

Default value: -1

Suggested values: 0, 10, 20, 30, 50, 100

Restriction: SearchRowTol == -1 || SearchRowTol >= 0

SearchColumnTol (input_control)  integer(-array) → (integer)

Search tolerance in column direction.

Default value: -1

Suggested values: 0, 10, 20, 30, 50, 100

Restriction: SearchColumnTol == -1 || SearchColumnTol >= 0

SearchAngleTol (input_control)  angle.rad(-array) → (real)

Angle search tolerance.

Default value: -1

Suggested values: 0.0, 0.17, 0.39, 0.78, 1.57

Restriction: SearchAngleTol == -1 || SearchAngleTol >= 0

TrainingEmphasis (input_control)  string → (string)

Decision whether the training emphasis should lie on a fast computation or on a high robustness.

Default value: 'speed'

List of values: 'reliability', 'speed'

AmbiguityCriterion (input_control)  string → (string)

Criterion for solving ambiguous matches of the initial components in the training images.

Default value: 'rigidity'

List of values: 'distance', 'distance_orientation', 'orientation', 'rigidity'

MaxContourOverlap (input_control)  real → (real)

Maximum contour overlap of the found initial components in a training image.

Default value: 0.2

Suggested values: 0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0

Minimum increment: 0.01

Recommended increment: 0.05

Restriction: 0 <= MaxContourOverlap && MaxContourOverlap <= 1

ClusterThreshold (input_control)  real → (real)

Threshold for clustering the initial components.

Default value: 0.5

Suggested values: 0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0

Restriction: 0 <= ClusterThreshold && ClusterThreshold <= 1

ComponentTrainingID (output_control)  component_training → (handle)

Handle of the training result.

Example (HDevelop)

* Get the model image.
read_image (ModelImage, 'model_image.tif')
* Define the regions for the initial components.
gen_rectangle2 (InitialComponentRegions, 212, 233, 0.62, 167, 29)
gen_rectangle2 (Rectangle2, 298, 363, 1.17, 162, 34)
gen_rectangle2 (Rectangle3, 63, 444, -0.26, 50, 27)
gen_rectangle2 (Rectangle4, 120, 473, 0, 33, 20)
concat_obj (InitialComponentRegions, Rectangle2, InitialComponentRegions)
concat_obj (InitialComponentRegions, Rectangle3, InitialComponentRegions)
concat_obj (InitialComponentRegions, Rectangle4, InitialComponentRegions)
* Get the training images.
gen_empty_obj (TrainingImages)
for i := 1 to 4 by 1
    read_image (TrainingImage, 'training_image-'+i+'.tif')
    concat_obj (TrainingImages, TrainingImage, TrainingImages)
endfor
* Extract the model components and train the relations.
train_model_components (ModelImage, InitialComponentRegions, \
                        TrainingImages, ModelComponents, 22, 60, 30, 0.6, \
                        0, 0, rad(60), 'speed', 'rigidity', 0.2, 0.4, \
                        ComponentTrainingID)

Result

If the parameter values are correct, the operator train_model_components returns the value 2 (H_MSG_TRUE). If the input is empty (no input images are available), the behavior can be set via set_system('no_object_result',<Result>). If necessary, an exception is raised.

Possible Predecessors

gen_initial_components

Possible Successors

inspect_clustered_components, cluster_model_components, modify_component_relations, write_training_components, get_training_components, get_component_relations, create_trained_component_model, clear_training_components

See also

create_shape_model, find_shape_model

Module

Matching

