find_scaled_shape_models
— Find the best matches of multiple isotropically scaled shape models.
find_scaled_shape_models(Image : : ModelIDs, AngleStart, AngleExtent, ScaleMin, ScaleMax, MinScore, NumMatches, MaxOverlap, SubPixel, NumLevels, Greediness : Row, Column, Angle, Scale, Score, Model)
The operator find_scaled_shape_models finds the best NumMatches instances of the isotropically scaled shape models that are passed in ModelIDs in the input image Image. The models must have been created previously by calling create_scaled_shape_model or read_shape_model. In contrast to find_scaled_shape_model, multiple models can be searched in the same image in one call.

The position, rotation, and scale of the found instances of the model are returned in Row, Column, Angle, and Scale. The score of each found instance is returned in Score. The type of the found instances of the models is returned in Model. For details see the respective sections below.

Compared to find_scaled_shape_model, the semantics of all input parameters have changed to some extent. All input parameters must either contain one element, in which case the parameter is used for all models, or must contain the same number of elements as ModelIDs, in which case each parameter element refers to the corresponding element in ModelIDs. (NumLevels may also contain either two or twice the number of elements as ModelIDs.) More details can be found below in the sections containing information for the respective parameters.

Note that a call to find_scaled_shape_models with multiple values for ModelIDs, NumMatches, and MaxOverlap has the same effect as multiple independent calls to find_scaled_shape_model with the respective parameters. However, a single call to find_scaled_shape_models is considerably more efficient.
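As an illustration, the following HDevelop sketch contrasts a single multi-model call with the pair of single-model calls it replaces. It assumes two model handles ModelID1 and ModelID2 that were created beforehand with create_scaled_shape_model; the concrete search parameters are placeholders.

* Sketch only: ModelID1 and ModelID2 are assumed to exist already.
ModelIDs := [ModelID1, ModelID2]
* One call searches both models in the same image ...
find_scaled_shape_models (Image, ModelIDs, -0.39, 0.78, 0.9, 1.1, 0.5, 0, \
                          0.5, 'least_squares', 0, 0.9, Row, Column, Angle, \
                          Scale, Score, Model)
* ... and yields the same matches as two independent (but slower) calls:
find_scaled_shape_model (Image, ModelID1, -0.39, 0.78, 0.9, 1.1, 0.5, 0, \
                         0.5, 'least_squares', 0, 0.9, Row1, Column1, \
                         Angle1, Scale1, Score1)
find_scaled_shape_model (Image, ModelID2, -0.39, 0.78, 0.9, 1.1, 0.5, 0, \
                         0.5, 'least_squares', 0, 0.9, Row2, Column2, \
                         Angle2, Scale2, Score2)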
Image and its domain:

The domain of the Image determines the search space for the reference point of the model, i.e., for the center of gravity of the domain (region) of the image that was used to create the shape model with create_scaled_shape_model. A different origin set with set_shape_model_origin is not taken into account. The model is searched within those points of the domain of the image in which the model lies completely within the image. This means that the model will not be found if it extends beyond the borders of the image, even if it would achieve a score greater than MinScore (see below). Note that, if for a certain pyramid level the model touches the image border, it might not be found even if it lies completely within the original image.
As a rule of thumb, the model might not be found if its distance to an image border falls below 2^(NumLevels-1).
This behavior can be changed with set_system('border_shape_models','true') for all models or with set_shape_model_param(ModelID, 'border_shape_models', 'true') for a specific model, which causes models that extend beyond the image border to be found if they achieve a score greater than MinScore. Here, points lying outside the image are regarded as occluded, i.e., they lower the score. It should be noted that the runtime of the search increases in this mode. Note further that in rare cases, which typically occur only for artificial images, the model might also not be found if for certain pyramid levels the model touches the border of the reduced domain. In that case, it may help to enlarge the reduced domain using, e.g., dilation_circle.
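A minimal sketch of the two ways to enable this mode (ModelID1 is a placeholder handle):

* Allow models that extend beyond the image border for all models ...
set_system ('border_shape_models', 'true')
* ... or only for one specific model:
set_shape_model_param (ModelID1, 'border_shape_models', 'true')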
As usual, the domain of the input Image is used to restrict the search space for the reference point of the models ModelIDs. Consistent with the above semantics, the input Image can therefore contain a single image object or an image object tuple containing multiple image objects. If Image contains a single image object, its domain is used as the region of interest for all models in ModelIDs. If Image contains multiple image objects, each domain is used as the region of interest for the corresponding model in ModelIDs. In this case, the images have to be identical except for their domains, i.e., Image cannot be constructed in an arbitrary manner using concat_obj, but must be created from the same image using add_channels or equivalent calls. If this is not the case, an error message is returned.
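For example, the search space for all models can be restricted with a region of interest before calling the operator; the ROI coordinates below are placeholders:

* Restrict the search space for the reference point of all models.
gen_rectangle1 (SearchROI, 100, 100, 400, 500)
reduce_domain (Image, SearchROI, SearchImage)
find_scaled_shape_models (SearchImage, ModelIDs, -0.39, 0.78, 0.9, 1.1, 0.5, \
                          0, 0.5, 'least_squares', 0, 0.9, Row, Column, \
                          Angle, Scale, Score, Model)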
AngleStart, AngleExtent, ScaleMin, ScaleMax:

The parameters AngleStart and AngleExtent determine the range of rotations for which the model is searched. The parameters ScaleMin and ScaleMax determine the range of scales for which the model is searched. If necessary, both ranges are clipped to the range given when the model was created with create_scaled_shape_model. In particular, this means that the angle ranges of the model and the search must overlap. Note that in some cases instances with a rotation or scale that is slightly outside the specified range are found. This may happen if the specified range is smaller than the range given during the creation of the model. AngleStart and AngleExtent as well as ScaleMin and ScaleMax are checked only at the highest pyramid level. Matches that are found on the highest pyramid level are refined to the lowest pyramid level. For performance reasons, however, during the refinement it is no longer checked whether the matches are still within the specified ranges.
MinScore:

The parameter MinScore determines what score a potential match must at least have to be regarded as an instance of the model in the image. The larger MinScore is chosen, the faster the search is. If the model can be expected never to be occluded in the images, MinScore may be set as high as 0.8 or even 0.9. If the matches are not tracked to the lowest pyramid level (see below), it might happen that instances with a score slightly below MinScore are found.

If a single value is passed in MinScore, this value is applied to found instances of all models. If, on the other hand, MinScore contains multiple values, the values are applied separately to the respective model.
If the shape models have been extended by clutter parameters with set_shape_model_clutter and thus 'use_clutter' is enabled, MinScore expects, in addition to each minimum score, a value that determines what clutter value a potential match may at most have to be regarded as an instance of the model in the image. The runtime using clutter parameters will be at least as high as the runtime without clutter parameters and NumMatches set to 0. Changing this second value does not influence the runtime. Note that the different shape models must have the same value for 'use_clutter'.

If the maximum clutter is specified separately for each model, which is needed if the minimum score is also set for each model individually, MinScore must contain twice as many elements as ModelIDs. In this case, the minimum score and the maximum clutter must be specified interleaved in MinScore. If, for example, two models are specified in ModelIDs, the minimum score is 0.9 for the first model and 0.8 for the second model, and the maximum clutter is 0.1 for the first model and 0.2 for the second model, MinScore = [0.9,0.1,0.8,0.2] must be selected.
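A sketch of this interleaved specification, assuming both models in ModelIDs were extended with set_shape_model_clutter beforehand and all other search parameters are placeholders:

* Minimum score 0.9 / maximum clutter 0.1 for the first model,
* minimum score 0.8 / maximum clutter 0.2 for the second model.
MinScore := [0.9, 0.1, 0.8, 0.2]
find_scaled_shape_models (Image, ModelIDs, -0.39, 0.78, 0.9, 1.1, MinScore, \
                          0, 0.5, 'least_squares', 0, 0.9, Row, Column, \
                          Angle, Scale, Score, Model)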
NumMatches:

The maximum number of instances to be found can be determined with NumMatches. If more than NumMatches instances with a score greater than MinScore are found in the image, only the best NumMatches instances are returned. If fewer than NumMatches are found, only that number is returned, i.e., the parameter MinScore takes precedence over NumMatches. If all model instances exceeding MinScore in the image should be found, NumMatches must be set to 0.

When tracking the matches through the image pyramid, on each level some less promising matches are rejected based on NumMatches. Thus, it is possible that some matches are rejected that would have had a higher score on the lowest pyramid level. Due to this, for example, the found match for NumMatches set to 1 might be different from the match with the highest score returned when setting NumMatches to 0 or > 1. If multiple objects with a similar score are expected, but only the one with the highest score should be returned, it might be preferable to raise NumMatches and then select the match with the highest score.
If the shape models have been extended by clutter parameters using set_shape_model_clutter, NumMatches also considers the second value passed in MinScore: If more than NumMatches instances with a score greater than the first entry of MinScore and a clutter value smaller than the second entry of MinScore are found in the image, only the best NumMatches instances with respect to clutter are returned. Still, MinScore takes precedence over NumMatches, and NumMatches must be set to 0 if all model instances fulfilling the conditions imposed by MinScore should be found. Please note that when using clutter parameters, no matches are rejected while tracking the matches through the image pyramid. Thus, the runtime using clutter parameters will be at least as high as the runtime without clutter parameters and NumMatches set to 0.
If NumMatches contains one element, find_scaled_shape_models returns the best NumMatches instances of the model irrespective of the type of the model. If, for example, two models are passed in ModelIDs and NumMatches = 2 is selected, it can happen that two instances of the first model and no instances of the second model, one instance of the first model and one instance of the second model, or no instances of the first model and two instances of the second model are returned. If, on the other hand, NumMatches contains multiple values, the number of instances returned of the different models corresponds to the number specified in the respective entry in NumMatches. If, for example, NumMatches = [1,1] is selected, one instance of the first model and one instance of the second model is returned.
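The two modes can be sketched as follows (all other search parameters are placeholders):

* One value: the best two matches over all models, regardless of their type.
find_scaled_shape_models (Image, ModelIDs, -0.39, 0.78, 0.9, 1.1, 0.5, 2, \
                          0.5, 'least_squares', 0, 0.9, Row, Column, Angle, \
                          Scale, Score, Model)
* Per-model values: at most one instance of each of the two models.
find_scaled_shape_models (Image, ModelIDs, -0.39, 0.78, 0.9, 1.1, 0.5, \
                          [1,1], 0.5, 'least_squares', 0, 0.9, Row, Column, \
                          Angle, Scale, Score, Model)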
MaxOverlap:

If the model exhibits symmetries, it may happen that multiple instances with similar positions but different rotations are found in the image. The parameter MaxOverlap determines by what fraction (i.e., a number between 0 and 1) two instances may at most overlap in order to consider them as different instances, and hence to be returned separately. If two instances overlap each other by more than MaxOverlap, only the best instance is returned. The calculation of the overlap is based on the smallest enclosing rectangle of arbitrary orientation (see smallest_rectangle2) of the found instances. If MaxOverlap = 0, the found instances may not overlap at all, while for MaxOverlap = 1 all instances are returned.

If a single value is passed in MaxOverlap, the overlap is computed for all found instances of the different models, irrespective of the model type, i.e., instances of the same or of different models that overlap too much are eliminated. If, on the other hand, multiple values are passed in MaxOverlap, the overlap is only computed for found instances of the model that have the same model type, i.e., only instances of the same model that overlap too much are eliminated. In this mode, models of different types may overlap completely.
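For example, passing one overlap value per model restricts the overlap test to instances of the same model (the values are placeholders):

* Instances of different models may overlap completely in this mode.
find_scaled_shape_models (Image, ModelIDs, -0.39, 0.78, 0.9, 1.1, 0.5, 0, \
                          [0.2, 0.2], 'least_squares', 0, 0.9, Row, Column, \
                          Angle, Scale, Score, Model)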
SubPixel:

The parameter SubPixel determines whether the instances should be extracted with subpixel accuracy. If SubPixel is set to 'none' (or 'false' for backwards compatibility), the model's pose is only determined with pixel accuracy and the angle and scale resolution that was specified with create_scaled_shape_model. If SubPixel is set to 'interpolation' (or 'true'), the position as well as the rotation and scale are determined with subpixel accuracy. In this mode, the model's pose is interpolated from the score function. This mode costs almost no computation time and achieves an accuracy that is high enough for most applications. In some applications, however, the accuracy requirements are extremely high. In these cases, the model's pose can be determined through a least-squares adjustment, i.e., by minimizing the distances of the model points to their corresponding image points. In contrast to 'interpolation', this mode requires additional computation time. The different modes for least-squares adjustment ('least_squares', 'least_squares_high', and 'least_squares_very_high') can be used to determine the accuracy with which the minimum distance is searched. The higher the accuracy is chosen, however, the longer the subpixel extraction will take. Usually, SubPixel should be set to 'interpolation'. If least-squares adjustment is desired, 'least_squares' should be chosen because this results in the best trade-off between runtime and accuracy.
Objects that are slightly deformed with respect to the model in some cases cannot be found or are found only with low accuracy. For such objects, it is possible to additionally pass a maximum allowable object deformation in the parameter SubPixel. The deformation must be specified in pixels. This can be done by passing the optional parameter value 'max_deformation' followed by an integer value between 0 and 32 (in the same string), which specifies the maximum deformation. For example, if the shape of the object may be deformed by up to 2 pixels with respect to the shape that is stored in the model, the value 'max_deformation 2' must be passed in SubPixel in addition to the above described mode for the subpixel extraction, i.e., for example ['least_squares', 'max_deformation 2']. Passing the value 'max_deformation 0' corresponds to a search without allowing deformations, i.e., the behavior is the same as if no 'max_deformation' value is passed.

Note that higher values for the maximum deformation often result in an increased runtime. Furthermore, the higher the deformation value is chosen, the higher is the risk of finding wrong model instances. Both problems mainly arise when searching for small objects or for objects with fine structures, because such objects lose their characteristic shape, which is important for a robust search, at higher deformations. Also note that for higher deformations the accuracy of partially occluded objects might decrease if clutter is present close to the object. Consequently, the maximum deformation should be chosen as small as possible and only as high as necessary.
Approximately rotationally symmetric objects may not be found if 'max_deformation' and AngleExtent are both set to a value greater than 0. In that case, ambiguities may occur that cannot be resolved, and the match is rejected as false. If this happens, try to set either 'max_deformation' or AngleExtent to 0, or adjust the model such that symmetries are reduced. When specifying a deformation greater than 0, the computation of the score depends on the chosen value for the subpixel extraction. In most cases, the score of a match changes if 'least_squares', 'least_squares_high', or 'least_squares_very_high' (see above) is chosen for the subpixel extraction (in comparison to 'none' or 'interpolation'). Furthermore, if one of the least-squares adjustments is selected, the score might increase when increasing the maximum deformation because then more corresponding image points can be found for the model points. To get a meaningful score value and to avoid erroneous matches, we recommend always combining the allowance of a deformation with a least-squares adjustment.
If the subpixel extraction and/or the maximum object deformation is specified separately for each model, exactly one value for the subpixel extraction must be passed in SubPixel for each model passed in ModelIDs. After each value for the subpixel extraction, optionally a second value can be passed, which describes the maximum object deformation of the corresponding mode. If no value for the maximum object deformation is passed for a certain model, the model is searched without taking deformations into account. For example, if two models are passed in ModelIDs, the subpixel extraction for the first model is set to 'interpolation' with no object deformations allowed, and the subpixel extraction for the second model is set to 'least_squares' with a maximum object deformation of 3 pixels, then the tuple ['interpolation', 'least_squares', 'max_deformation 3'] must be passed in SubPixel. Alternatively, the equivalent tuple ['interpolation', 'max_deformation 0', 'least_squares', 'max_deformation 3'] may be passed.
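The per-model specification from this example can be sketched as follows (the remaining search parameters are placeholders):

* First model: 'interpolation', no deformation allowed.
* Second model: 'least_squares' with up to 3 pixels of deformation.
SubPixel := ['interpolation', 'least_squares', 'max_deformation 3']
find_scaled_shape_models (Image, ModelIDs, -0.39, 0.78, 0.9, 1.1, 0.5, 0, \
                          0.5, SubPixel, 0, 0.9, Row, Column, Angle, Scale, \
                          Score, Model)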
NumLevels:

The number of pyramid levels used during the search is determined with NumLevels. If necessary, the number of levels is clipped to the range given when the shape model was created with create_scaled_shape_model. If NumLevels is set to 0, the number of pyramid levels specified in create_scaled_shape_model is used.

In certain cases, the number of pyramid levels that was determined automatically with, for example, create_scaled_shape_model may be too high. The consequence may be that some matches that may have a high final score are rejected on the highest pyramid level and thus are not found. Instead of setting MinScore to a very low value to find all matches, it may be better to query the value of NumLevels with get_shape_model_params and then use a slightly lower value in find_scaled_shape_models. This approach is often better regarding the speed and robustness of the matching.
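A sketch of this approach for the first model; the output parameter layout of get_shape_model_params is assumed as in its reference documentation, and the single NumLevels value is then applied to all models:

* Query the automatically determined number of pyramid levels ...
get_shape_model_params (ModelID1, NumLevels, AngleStart, AngleExtent, \
                        AngleStep, ScaleMin, ScaleMax, ScaleStep, Metric, \
                        MinContrast)
* ... and search with one level less (but at least one level).
find_scaled_shape_models (Image, ModelIDs, -0.39, 0.78, 0.9, 1.1, 0.5, 0, \
                          0.5, 'least_squares', max([NumLevels - 1, 1]), \
                          0.9, Row, Column, Angle, Scale, Score, Model)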
Optionally, NumLevels can contain a second value that determines the lowest pyramid level to which the found matches are tracked. Hence, a value of [4,2] for NumLevels means that the matching starts at the fourth pyramid level and tracks the matches to the second lowest pyramid level (the lowest pyramid level is denoted by a value of 1). This mechanism can be used to decrease the runtime of the matching. It should be noted, however, that in general the accuracy of the extracted pose parameters is lower in this mode than in the normal mode, in which the matches are tracked to the lowest pyramid level. Hence, if a high accuracy is desired, SubPixel should be set to at least 'least_squares'. If the lowest pyramid level to use is chosen too large, it may happen that the desired accuracy cannot be achieved, or that wrong instances of the model are found because the model is not specific enough on the higher pyramid levels to facilitate a reliable selection of the correct instance of the model. In this case, the lowest pyramid level to use must be set to a smaller value.
If the lowest pyramid level is specified separately for each model, NumLevels must contain twice as many elements as ModelIDs. In this case, the number of pyramid levels and the lowest pyramid level must be specified interleaved in NumLevels. If, for example, two models are specified in ModelIDs, the number of pyramid levels is 5 for the first model and 4 for the second model, and the lowest pyramid level is 2 for the first model and 1 for the second model, NumLevels = [5,2,4,1] must be selected. If exactly two models are specified in ModelIDs, a special case occurs: if the lowest pyramid level is to be specified, the number of pyramid levels and the lowest pyramid level must be specified explicitly for both models, even if they are identical, because specifying two values in NumLevels is interpreted as the explicit specification of the number of pyramid levels for the two models.
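The interleaved specification from this example looks as follows (the remaining search parameters are placeholders):

* First model: 5 pyramid levels, lowest level 2.
* Second model: 4 pyramid levels, lowest level 1.
NumLevels := [5, 2, 4, 1]
find_scaled_shape_models (Image, ModelIDs, -0.39, 0.78, 0.9, 1.1, 0.5, 0, \
                          0.5, 'least_squares', NumLevels, 0.9, Row, Column, \
                          Angle, Scale, Score, Model)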
In input images of poor quality, i.e., in images that are, e.g., defocused, deformed, or noisy, often no instances of the shape model can be found on the lowest pyramid level. The reason for this behavior is the missing or deformed edge information, which is a result of the poor image quality. Nevertheless, the edge information may be sufficient on higher pyramid levels. But keep in mind the above-mentioned restrictions on accuracy and robustness if instances that were found on higher pyramid levels are used. The selection of the suitable pyramid level, i.e., the lowest pyramid level on which at least one instance of the shape model can be found, depends on the model and on the input image. This pyramid level may vary from image to image. To facilitate the matching on images of poor quality, the lowest pyramid level on which at least one instance of the model can be found can be determined automatically during the matching. To activate this mechanism, i.e., to use the so-called 'increased tolerance mode', the lowest pyramid level must be specified negatively in NumLevels. If, e.g., NumLevels is set to [5,2,4,-1], the lowest pyramid level for the first model is set to 2. If no instance of the first model can be found on this pyramid level, no result will be returned for this model. For the second shape model, the lowest pyramid level is set to -1. Therefore, an instance of the shape model is searched on pyramid level 1. If no instance of the second model can be found on this pyramid level, the lowest pyramid level is determined on which at least one instance of the model can be found. The instances of this pyramid level will then be returned.
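The increased tolerance mode of this example can be requested as follows (all other parameters are placeholders):

* Second model: the negative lowest level -1 activates the increased
* tolerance mode, i.e., the lowest usable pyramid level is determined
* automatically if nothing is found on level 1.
find_scaled_shape_models (Image, ModelIDs, -0.39, 0.78, 0.9, 1.1, 0.5, 0, \
                          0.5, 'least_squares', [5, 2, 4, -1], 0.9, Row, \
                          Column, Angle, Scale, Score, Model)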
If a model was adapted with adapt_shape_model_high_noise, the estimated lowest pyramid level is used by default. However, the user can override the estimated lowest pyramid level by explicitly providing the lowest pyramid level as described above.
Greediness:

The parameter Greediness determines how “greedily” the search should be carried out. If Greediness = 0, a safe search heuristic is used, which always finds the model if it is visible in the image and the other parameters are set appropriately. However, the search will be relatively time-consuming in this case. If Greediness = 1, an unsafe search heuristic is used, which may cause the model not to be found in rare cases, even though it is visible in the image. For Greediness = 1, the maximum search speed is achieved. In almost all cases, the shape model will always be found for Greediness = 0.9.
Row, Column, Angle, and Scale:

The position, rotation, and scale of the found instances of the model are returned in Row, Column, Angle, and Scale. The coordinates Row and Column are the coordinates of the origin of the shape model in the search image. By default, the origin is the center of gravity of the domain (region) of the image that was used to create the shape model with create_scaled_shape_model. A different origin can be set with set_shape_model_origin.

Note that the coordinates Row and Column do not exactly correspond to the position of the model in the search image. Thus, you cannot use them directly. Instead, the values are optimized for creating the transformation matrix with which you can use the results of the matching for various tasks, e.g., to align ROIs for other processing steps. The example given for find_scaled_shape_model shows how to create this matrix and use it to display the model at the found position in the search image.
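A sketch of this visualization pattern, transforming the model contours to each found pose along the lines of the find_scaled_shape_model example:

* Display every found instance by transforming its model contours.
for I := 0 to |Row| - 1 by 1
    get_shape_model_contours (ModelContours, ModelIDs[Model[I]], 1)
    * Rigid transformation from the model origin to the found pose ...
    vector_angle_to_rigid (0, 0, 0, Row[I], Column[I], Angle[I], HomMat2D)
    * ... combined with the found scaling around the found position.
    hom_mat2d_scale (HomMat2D, Scale[I], Scale[I], Row[I], Column[I], \
                     HomMat2DScale)
    affine_trans_contour_xld (ModelContours, FoundContours, HomMat2DScale)
    dev_display (FoundContours)
endfor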
Note also that for visualizing the model at the found position, the procedure dev_display_shape_matching_results can be used.
Score:

The score of each found instance is returned in Score. The score is a number between 0 and 1, which is an approximate measure of how much of the model is visible in the image. If, for example, half of the model is occluded, the score cannot exceed 0.5.

If the shape models have been extended by clutter parameters using set_shape_model_clutter, Score also returns, following the above values, the clutter value of each found instance. If, for example, half of the clutter region is filled by clutter edges, the clutter value will equal 0.5. If, e.g., two instances are found, the score is 0.9 for the first instance and 0.8 for the second instance, and the clutter value is 0.2 for the first instance and 0.1 for the second instance, Score = [0.9,0.8,0.2,0.1] is returned.
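If clutter parameters are used, the returned tuple can be split into scores and clutter values, for example with tuple slicing (a sketch that assumes at least one instance was found):

* Row contains one entry per match; Score additionally appends the
* clutter values after the scores.
NumFound := |Row|
Scores := Score[0:NumFound - 1]
ClutterValues := Score[NumFound:|Score| - 1]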
Please note that of all shape-based matching results, clutter values are affected the most when a variation of illumination occurs.
Model:

The type of the found instances of the models is returned in Model. The elements of Model are indices into the tuple ModelIDs, i.e., they can contain values from 0 to |ModelIDs|-1. Hence, a value of 0 in an element of Model corresponds to an instance of the first model in ModelIDs.
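For example, the results belonging to the first model in ModelIDs can be selected as follows (a sketch using the tuple operators tuple_find and tuple_select):

* Indices of all matches that stem from the first model (index 0).
tuple_find (Model, 0, Indices0)
if (Indices0 != -1)
    tuple_select (Row, Indices0, Rows0)
    tuple_select (Column, Indices0, Columns0)
    tuple_select (Angle, Indices0, Angles0)
    tuple_select (Scale, Indices0, Scales0)
endif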
Using the operator set_shape_model_param you can specify a 'timeout' for find_scaled_shape_models. If the shape models referenced by ModelIDs hold different values for 'timeout', find_scaled_shape_models uses the smallest one. If find_scaled_shape_models reaches this 'timeout', it terminates without results and returns the error code 9400 (H_ERR_TIMEOUT). Depending on the scaling range specified by ScaleMin and ScaleMax, find_scaled_shape_models needs a significant amount of time to free cached transformations if the shape model is not pregenerated. As these transformations have to be freed after a timeout occurs, the runtime of find_scaled_shape_models exceeds the value of the specified 'timeout' by this time.
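A sketch of using the timeout; the value is assumed to be in milliseconds, and ModelID1 and ModelID2 are placeholder handles:

* The smallest 'timeout' of all models in ModelIDs is used by the search.
set_shape_model_param (ModelID1, 'timeout', 1000)
set_shape_model_param (ModelID2, 'timeout', 1000)
try
    find_scaled_shape_models (Image, ModelIDs, -0.39, 0.78, 0.9, 1.1, 0.5, \
                              0, 0.5, 'least_squares', 0, 0.9, Row, Column, \
                              Angle, Scale, Score, Model)
catch (Exception)
    * Error code 9400 (H_ERR_TIMEOUT): the search was aborted.
    if (Exception[0] == 9400)
        Row := []
        Column := []
    endif
endtry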
Please note that the different models that are given with the parameter ModelIDs should have been created with the same value of MinContrast. If they were created with different values for MinContrast, find_scaled_shape_models will use the smallest of these values.
To display the results found by shape-based matching, we highly recommend the use of the procedure dev_display_shape_matching_results.

For an explanation of the different 2D coordinate systems used in HALCON, see the introduction of chapter Transformations / 2D Transformations.

This operator supports cancelling timeouts and interrupts.
Image (input_object) (multichannel-)image(-array) → object (byte / uint2)
Input image in which the models should be found.

ModelIDs (input_control) shape_model(-array) → (handle)
Handle of the models.

AngleStart (input_control) angle.rad(-array) → (real)
Smallest rotation of the models.
Default value: -0.39
Suggested values: -3.14, -1.57, -0.79, -0.39, -0.20, 0.0

AngleExtent (input_control) angle.rad(-array) → (real)
Extent of the rotation angles.
Default value: 0.78
Suggested values: 6.29, 3.14, 1.57, 0.79, 0.39, 0.0
Restriction: AngleExtent >= 0

ScaleMin (input_control) number(-array) → (real)
Minimum scale of the models.
Default value: 0.9
Suggested values: 0.5, 0.6, 0.7, 0.8, 0.9, 1.0
Restriction: ScaleMin > 0

ScaleMax (input_control) number(-array) → (real)
Maximum scale of the models.
Default value: 1.1
Suggested values: 1.0, 1.1, 1.2, 1.3, 1.4, 1.5
Restriction: ScaleMax >= ScaleMin

MinScore (input_control) real(-array) → (real)
Minimum score of the instances of the models to be found.
Default value: 0.5
Suggested values: 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0
Typical range of values: 0 ≤ MinScore ≤ 1
Minimum increment: 0.01
Recommended increment: 0.05

NumMatches (input_control) integer(-array) → (integer)
Number of instances of the models to be found (or 0 for all matches).
Default value: 1
Suggested values: 0, 1, 2, 3, 4, 5, 10, 20

MaxOverlap (input_control) real(-array) → (real)
Maximum overlap of the instances of the models to be found.
Default value: 0.5
Suggested values: 0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0
Typical range of values: 0 ≤ MaxOverlap ≤ 1
Minimum increment: 0.01
Recommended increment: 0.05

SubPixel (input_control) string(-array) → (string)
Subpixel accuracy if not equal to 'none'.
Default value: 'least_squares'
Suggested values: 'none', 'interpolation', 'least_squares', 'least_squares_high', 'least_squares_very_high', 'max_deformation 1', 'max_deformation 2', 'max_deformation 3', 'max_deformation 4', 'max_deformation 5', 'max_deformation 6'

NumLevels (input_control) integer(-array) → (integer)
Number of pyramid levels used in the matching (and lowest pyramid level to use if |NumLevels| = 2).
Default value: 0
List of values: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10

Greediness (input_control) real(-array) → (real)
“Greediness” of the search heuristic (0: safe but slow; 1: fast but matches may be missed).
Default value: 0.9
Suggested values: 0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0
Typical range of values: 0 ≤ Greediness ≤ 1
Minimum increment: 0.01
Recommended increment: 0.05

Row (output_control) point.y-array → (real)
Row coordinate of the found instances of the models.

Column (output_control) point.x-array → (real)
Column coordinate of the found instances of the models.

Angle (output_control) angle.rad-array → (real)
Rotation angle of the found instances of the models.

Scale (output_control) number-array → (real)
Scale of the found instances of the models.

Score (output_control) real-array → (real)
Score of the found instances of the models.

Model (output_control) integer-array → (integer)
Index of the found instances of the models.
read_image (Image, 'pcb_focus/pcb_focus_telecentric_061')
gen_rectangle1 (ROI_0, 236, 241, 313, 321)
gen_circle (ROI_1, 281, 653, 41)
reduce_domain (Image, ROI_0, ImageReduced1)
reduce_domain (Image, ROI_1, ImageReduced2)
create_scaled_shape_model (ImageReduced1, 'auto', -0.39, 0.79, 'auto', 0.9, \
                           1.1, 'auto', 'auto', 'use_polarity', 'auto', \
                           'auto', ModelID1)
create_scaled_shape_model (ImageReduced2, 'auto', -0.39, 0.79, 'auto', 0.9, \
                           1.1, 'auto', 'auto', 'use_polarity', 'auto', \
                           'auto', ModelID2)
ModelIDs := [ModelID1, ModelID2]
find_scaled_shape_models (Image, ModelIDs, -0.39, 0.78, 0.9, 1.1, 0.5, 1, \
                          0.5, 'least_squares', 0, 0.9, Row, Column, Angle, \
                          Scale, Score, Model)
* Display results
dev_display_shape_matching_results (ModelIDs, 'red', Row, Column, Angle, \
                                    Scale, 1, Model)
If the parameter values are correct, the operator find_scaled_shape_models returns the value TRUE. If the input is empty (no input images are available), the behavior can be set via set_system('no_object_result',<Result>). If necessary, an exception is raised.
Possible Predecessors: add_channels, create_scaled_shape_model, read_shape_model, set_shape_model_origin, set_shape_model_clutter

Alternatives: find_shape_models, find_aniso_shape_models, find_shape_model, find_scaled_shape_model, find_aniso_shape_model, find_ncc_model, find_ncc_models

See also: set_system, get_system, set_shape_model_param

Module: Matching