get_dl_model_param (Operator)

Name

get_dl_model_param — Return the parameters of a deep learning model.

Signature

get_dl_model_param( : : DLModelHandle, GenParamName : GenParamValue)

Herror T_get_dl_model_param(const Htuple DLModelHandle, const Htuple GenParamName, Htuple* GenParamValue)

void GetDlModelParam(const HTuple& DLModelHandle, const HTuple& GenParamName, HTuple* GenParamValue)

HTuple HDlModel::GetDlModelParam(const HString& GenParamName) const

HTuple HDlModel::GetDlModelParam(const char* GenParamName) const

HTuple HDlModel::GetDlModelParam(const wchar_t* GenParamName) const   (Windows only)

static void HOperatorSet.GetDlModelParam(HTuple DLModelHandle, HTuple genParamName, out HTuple genParamValue)

HTuple HDlModel.GetDlModelParam(string genParamName)

Description

get_dl_model_param returns the parameter values GenParamValue of GenParamName for the deep learning model DLModelHandle.

For a deep learning model, parameters GenParamName can be set using set_dl_model_param or create_dl_model_detection, depending on the parameter and the model type. With this operator, get_dl_model_param, you can retrieve the parameter values GenParamValue. Below we give an overview of the different parameters with an explanation, except for those that can only be set. For the latter, please see the documentation of the corresponding operator.
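As a minimal HDevelop sketch of the retrieval described above (the model file name is a placeholder; any readable deep learning model works):

```
* Read a deep learning model from file (file name is an example).
read_dl_model ('my_model.hdl', DLModelHandle)
* Query the model type and the current batch size.
get_dl_model_param (DLModelHandle, 'type', ModelType)
get_dl_model_param (DLModelHandle, 'batch_size', BatchSize)
```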

GenParamName                       Object Detection    Semantic Segmentation
                                   create  set  get      set  get
'batch_size'                          n     y    y        y    y
'batch_size_multiplier'               n     y    y        y    y
'class_ids'                           y     y    y        y    y
'gpu'                                 n     y    y        y    y
'image_dimensions'                    y     n    y        y    y
'image_height'                        y     n    y        y    y
'image_width'                         y     n    y        y    y
'image_num_channels'                  y     n    y        y    y
'image_range_max'                     n     n    y        y    y
'image_range_min'                     n     n    y        y    y
'learning_rate'                       n     y    y        y    y
'momentum'                            n     y    y        y    y
'num_classes' (NumClasses)            y     n    y        n    y
'runtime'                             n     y    y        y    y
'runtime_init'                        n     y    n        y    n
'type'                                n     n    y        n    y
'weight_prior'                        n     y    y        y    y
'anchor_angles'                       y     n    y        -    -
'anchor_aspect_ratios'                y     n    y        -    -
'anchor_num_subscales'                y     n    y        -    -
'backbone' (Backbone)                 y     n    y        -    -
'bbox_heads_weight'                   y     n    y        -    -
'capacity'                            y     n    y        -    -
'class_heads_weight'                  y     n    y        -    -
'class_ids_no_orientation'            y     y    y        -    -
'class_weights'                       y     n    y        -    -
'ignore_direction'                    y     n    y        -    -
'instance_type'                       y     n    y        -    -
'max_level'                           y     n    y        -    -
'min_level'                           y     n    y        -    -
'max_num_detections'                  y     y    y        -    -
'max_overlap'                         y     y    y        -    -
'max_overlap_class_agnostic'          y     y    y        -    -
'min_confidence'                      y     y    y        -    -
'ignore_class_ids'                    -     -    -        y    y

Thereby, 'set' denotes set_dl_model_param, 'get' get_dl_model_param, and 'create' create_dl_model_detection. We note 'y' if the operator can be used for this parameter and model, 'n' if not, and '-' if the parameter is not applicable for this type of model. Certain parameters are set as non-optional parameters; the corresponding notation is given in parentheses.

In the following we list and explain the parameters GenParamName whose values you can retrieve using this operator, get_dl_model_param. They are sorted according to the model type. Note that for models of 'type'='segmentation' the default values depend on the specific network and therefore have to be retrieved.

Any Model
'batch_size':

Number of input images (and corresponding labels) in a batch that is transferred to device memory. The batch of images processed simultaneously in a single training iteration contains a number of images equal to 'batch_size' times 'batch_size_multiplier'. Please refer to train_dl_model_batch for further details. For inference, 'batch_size' can generally be set independently of the number of input images. See apply_dl_model for details on how to set this parameter for greater efficiency.

'batch_size_multiplier':

Multiplier for 'batch_size' to enable training with larger numbers of images in one step, which would otherwise not be possible due to GPU memory limitations. This model parameter does not have any impact during evaluation and inference. For detailed information see train_dl_model_batch.
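To illustrate the relation between the two parameters, a short HDevelop sketch (the concrete values are arbitrary examples); the effective number of images per training iteration is their product:

```
* 2 images per transfer to device memory, accumulated over 4 steps:
set_dl_model_param (DLModelHandle, 'batch_size', 2)
set_dl_model_param (DLModelHandle, 'batch_size_multiplier', 4)
* Effectively, 2 * 4 = 8 images contribute to one weight update
* of train_dl_model_batch.
```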

'class_ids':

Unique IDs of the classes the model shall distinguish. Thereby, you can set any integer within the allowed interval as class ID value. The tuple is of length 'num_classes'.

We point out the slightly different meanings and restrictions depending on the model type:

Models of 'type'='detection':

Only the classes of the objects to be detected are included, thus no background class.

Note that the values of 'class_ids_no_orientation' depend on 'class_ids'. Thus, if 'class_ids' is changed after the creation of the model, 'class_ids_no_orientation' is reset to an empty tuple.

Default: 'class_ids' = [0, ..., 'num_classes' - 1]

Models of 'type'='segmentation':

Every class used for training has to be included, thus also the class ID of the 'background' class. Therefore, for such a model the tuple has a minimum length of 2.

'gpu':

Identifier of the GPU where the training and inference operators (train_dl_model_batch and apply_dl_model) are executed. Per default, the first available GPU is used. get_system with 'cuda_devices' can be used to retrieve a list of available GPUs. Pass the index in this list to 'gpu'.

Default: 'gpu' = 0
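A sketch of selecting a GPU along the lines described above, assuming at least two CUDA devices are present:

```
* Retrieve the list of available CUDA devices.
get_system ('cuda_devices', CudaDevices)
* Select the second device (index 1) if available.
if (|CudaDevices| > 1)
    set_dl_model_param (DLModelHandle, 'gpu', 1)
endif
```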

'image_dimensions':

Tuple containing the input image dimensions 'image_width', 'image_height', and number of channels 'image_num_channels'.

The respective default values and possible value ranges depend on the model and model type. Please see the individual dimension parameter descriptions for more details.
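For instance, the three dimensions can be read out in one call (a sketch assuming DLModelHandle holds a valid model):

```
get_dl_model_param (DLModelHandle, 'image_dimensions', ImageDimensions)
* The tuple is ordered [width, height, number of channels].
ImageWidth := ImageDimensions[0]
ImageHeight := ImageDimensions[1]
ImageNumChannels := ImageDimensions[2]
```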

'image_height', 'image_width':

Height and width of the input images, respectively, that the network will process.

This parameter can attain different values depending on the model type:

Models of 'type'='detection':

The network architectures allow changes of the image dimensions. But since the image side lengths are halved for every level, the dimensions 'image_width' and 'image_height' need to be an integer multiple of a certain value, which depends on the 'backbone' and the parameter 'max_level'; see create_dl_model_detection for further information.

Default: 'image_height' = 640, 'image_width' = 640

Models of 'type'='segmentation':

The network architectures allow changes of the image dimensions.

The default and minimal values are given by the network, see read_dl_model.

'image_num_channels':

Number of channels of the input images the network will process. The default value is given by the network, see read_dl_model and create_dl_model_detection.

Any number of input image channels is possible.

If the number of channels is changed to a value > 1, the weights of the first layers after the input image layer will be initialized with random values. Note that in this case more data for the retraining is needed. If the number of channels is changed to 1, the weights of the concerned layers are fused.

Models of 'type'='detection':

Default: 'image_num_channels' = 3

'image_range_max', 'image_range_min':

Maximum and minimum gray value of the input images, respectively, the network will process.

The default values are given by the network, see read_dl_model and create_dl_model_detection.

'learning_rate':

Value of the factor determining the gradient influence during training. Please refer to train_dl_model_batch for further details. The default values depend on the model.

'momentum':

When updating the weights of the network, the hyperparameter 'momentum' specifies to which extent previous updating vectors will be added to the current updating vector. Please refer to train_dl_model_batch for further details. The default value is given by the model.

'num_classes':

Number of distinct classes that the model is able to distinguish for its predictions.

This parameter differs between the model types. For a model of 'type'='detection', the 'background' class is not included, as background is not predicted by a detector. Also, this parameter is set as NumClasses via create_dl_model_detection, and 'class_ids' always needs to have a number of entries equal to 'num_classes'. A model of 'type'='segmentation', however, does predict background, and therefore in this case the 'background' class is included in 'num_classes'. For these models, 'num_classes' is determined implicitly by the length of 'class_ids'.
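The relation between the two parameters can be checked as follows (a sketch assuming DLModelHandle holds a valid model):

```
get_dl_model_param (DLModelHandle, 'class_ids', ClassIds)
get_dl_model_param (DLModelHandle, 'num_classes', NumClasses)
* For a detection model, |ClassIds| equals NumClasses (no background).
* For a segmentation model, NumClasses is implied by |ClassIds|,
* including the 'background' class.
```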

'runtime':

Defines the device on which the operators will be executed.

Default: 'runtime' = 'gpu'

'cpu':

The operator apply_dl_model will be executed on the CPU, whereas the operator train_dl_model_batch is not executable.

In case the GPU has been used before, CPU memory is initialized, and if necessary values stored on the GPU memory are moved to the CPU memory.

On Intel or AMD architectures the 'cpu' runtime uses OpenMP for the parallelization of apply_dl_model, where per default all threads available to the OpenMP runtime are used. You may use the set_system parameter 'tsp_thread_num' to specify the number of threads.

On Arm architectures the 'cpu' runtime uses a global thread pool. You may specify the number of threads with the set_system parameter 'thread_num'. You cannot specify a thread-specific number of threads on Arm architectures.

'gpu':

The GPU memory is initialized. The operators apply_dl_model and train_dl_model_batch will be executed on the GPU. For the specific requirements please refer to the HALCON “Installation Guide”.
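Switching a model to CPU execution might look as follows (a sketch; the thread count is an arbitrary example value):

```
* Execute apply_dl_model on the CPU.
set_dl_model_param (DLModelHandle, 'runtime', 'cpu')
* Optionally restrict the number of threads (Intel/AMD, OpenMP).
set_system ('tsp_thread_num', 4)
```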

'type':

This parameter returns the model type. The following types are distinguished: 'detection' and 'segmentation'.

'weight_prior':

Regularization parameter used for the regularization of the loss function. For a detailed description of the regularization term we refer to train_dl_model_batch. Simply put: Regularization favors simpler models that are less likely to learn noise in the data and generalize better. Per default no regularization is used, i.e., 'weight_prior' is set to 0.0. In case the classifier overfits the data, it is strongly recommended to try different values for the parameter 'weight_prior' to improve the generalization properties of the neural network. Choosing its value is a trade-off between the model's ability to generalize, overfitting, and underfitting. If 'weight_prior' is too small, the model might overfit; if it is too large, the model might lose its ability to fit the data, because all weights are effectively zero. For finding an ideal value, we recommend a cross-validation, i.e., to perform the training for a range of values and choose the value that results in the best validation error. For typical applications, we recommend testing the values for 'weight_prior' on a logarithmic scale. If the training takes a very long time, one might consider performing the hyperparameter optimization on a reduced amount of data.

Default: 'weight_prior' = 0.0
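Such a cross-validation over 'weight_prior' could be sketched as follows (the candidate values are example choices, not recommendations from this reference):

```
* Candidate regularization values on a logarithmic scale.
WeightPriors := [0.0, 0.00001, 0.0001, 0.001, 0.01]
for I := 0 to |WeightPriors| - 1 by 1
    set_dl_model_param (DLModelHandle, 'weight_prior', WeightPriors[I])
    * ... train with train_dl_model_batch and evaluate the
    * validation error here; keep the best-performing value ...
endfor
```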

Models of 'type'='detection'
'anchor_angles':

The parameter 'anchor_angles' determines the orientation angle of the anchors for a model of 'instance_type' = 'rectangle2'.

Thereby, the orientation is given in radians and indicates the angle between the horizontal axis and 'Length1' (mathematically positive). See the chapter Deep Learning / Object Detection for more explanations of anchors.

You can set a tuple of values. A higher number of angles increases the number of anchors, which might lead to a better localization but also increases the runtime and memory consumption.

Assertion: the values of 'anchor_angles' lie in the interval [0, 2*pi) for 'ignore_direction' = 'false' and in [0, pi) for 'ignore_direction' = 'true'.

Default: 'anchor_angles' = [0.0]

'anchor_aspect_ratios' (legacy: 'aspect_ratios'):

The parameter 'anchor_aspect_ratios' determines the aspect ratio of the anchors. Thereby, the definition of the ratio depends on the 'instance_type':

  • 'rectangle1': height-to-width ratio

  • 'rectangle2': ratio of 'Length1' to 'Length2'

E.g., for instance type 'rectangle1' the ratio 2 gives a narrow and 0.5 a broad anchor. The size of the anchor is affected by the parameter 'anchor_num_subscales', and with its explanation we give the formula for the sizes and lengths of the generated anchors. See the chapter Deep Learning / Object Detection for more explanations of anchors.

You can set a tuple of values. A higher number of aspect ratios increases the number of anchors, which might lead to a better localization but also increases the runtime and memory consumption.

For reasons of backward compatibility, the parameter name 'aspect_ratios' can be used instead of 'anchor_aspect_ratios'.

Default: 'anchor_aspect_ratios' = [1.0, 2.0, 0.5]

'anchor_num_subscales' (legacy: 'num_subscales'):

This parameter determines the number of different sizes with which the anchors are generated at the different levels used.

In HALCON, for every anchor point, thus every pixel of every feature map of the feature pyramid, a set of anchors is proposed. See the chapter Deep Learning / Object Detection for more explanations of anchors. Thereby the parameter 'anchor_num_subscales' affects the size of the anchors. An example is shown in the figure below.

(Figure) With 'anchor_num_subscales'=2, for every aspect ratio 2 anchors of different size are generated on each level: one with the base length (solid line) and an additional, larger one (dotted line). In the image, these additional anchors of the lower level (orange) converge to the anchor of the next higher level (blue).

An anchor of level l has by default a size proportional to 2^l in the input image. With the parameter 'anchor_num_subscales', additional anchors can be generated whose sizes converge to the size of the smallest anchor of the level l+1. More precisely, these anchors of level l have in the input image a size proportional to 2^(l + k/'anchor_num_subscales'), where k = 0, ..., 'anchor_num_subscales' - 1. The height and width of such an anchor additionally depend on its aspect ratio (see 'anchor_aspect_ratios').

A larger number of subscales increases the number of anchors and will therefore increase the runtime and memory consumption.

For reasons of backward compatibility, the parameter name 'num_subscales' can be used instead of 'anchor_num_subscales'.

Default: 'anchor_num_subscales' = 3
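Since the anchor parameters can only be set at creation time (see the table above), they are passed via the generic parameter dict of create_dl_model_detection. A hedged sketch (backbone file name and class count are placeholders):

```
* Collect creation-time parameters in a dict.
create_dict (DLModelDetectionParam)
set_dict_tuple (DLModelDetectionParam, 'anchor_num_subscales', 2)
set_dict_tuple (DLModelDetectionParam, 'anchor_aspect_ratios', [1.0, 2.0, 0.5])
* Create a detection model with 3 classes (placeholder backbone name).
create_dl_model_detection ('my_backbone.hdl', 3, DLModelDetectionParam, DLModelHandle)
* The values can afterwards be read back, but not changed.
get_dl_model_param (DLModelHandle, 'anchor_num_subscales', NumSubscales)
```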

'backbone':

The parameter 'backbone' is the name (together with the path) of the backbone network which is used to create the model. A list of the delivered backbone networks can be found under create_dl_model_detection.

'bbox_heads_weight', 'class_heads_weight':

The parameters 'bbox_heads_weight' and 'class_heads_weight' are weighting factors for the calculation of the total loss. This means, when the losses of the individual networks are summed up, the contributions from the bounding box regression heads are weighted by a factor 'bbox_heads_weight' and the contributions from the classification heads are weighted by a factor 'class_heads_weight'.

Default: 'bbox_heads_weight' = 1.0, 'class_heads_weight' = 1.0

'capacity':

This parameter roughly determines the number of parameters (or filter weights) in the deeper sections of the object detection network (after the backbone). Its possible values are 'high', 'medium', and 'low'.

It can be used to trade off between detection performance and speed. For simpler object detection tasks, the 'low' or 'medium' settings may be sufficient to achieve the same detection performance as with 'high'.

Default: 'capacity' = 'high'

'class_ids_no_orientation':

With this parameter you can declare classes for which the orientation will not be considered, e.g., round or other point-symmetric objects. For each class whose class ID is present in 'class_ids_no_orientation', the network returns axis-aligned bounding boxes.

Note that this parameter only affects networks of 'instance_type' = 'rectangle2'.

Note that the values of 'class_ids_no_orientation' depend on 'class_ids'. Thus, if 'class_ids' is changed after the creation of the model, 'class_ids_no_orientation' is reset to an empty tuple.

Default: 'class_ids_no_orientation' = []

'class_weights':

The parameter 'class_weights' is a tuple of class-specific weighting factors for the loss. By giving the unique classes different weights, it is possible to force the network to learn the classes with different importance. This is useful in cases where a class dominates the dataset. The weighting factors have to be within the interval [0, 1]. Thereby, a class gets a stronger impact during training the larger its weight is. The weights in the tuple 'class_weights' are sorted the same way as the classes in the tuple 'class_ids'. One exception is the case where all classes have the same value in 'class_weights'; in this case the value is returned as a single number.

Default: 'class_weights' = 0.25 (for each class).

'instance_type':

The parameter 'instance_type' determines which instance type is used for the object model. The current implementations differ regarding the allowed orientations of the bounding boxes. See the chapter Deep Learning / Object Detection for more explanations of the different types and their bounding boxes.

Possible values: 'rectangle1', 'rectangle2'

Default: 'instance_type' = 'rectangle1'

'max_level', 'min_level':

These parameters determine on which levels of the feature pyramid the additional networks are attached. We refer to the chapter Deep Learning / Object Detection for further explanations of the feature pyramid and the attached networks.

From these ('max_level' - 'min_level' + 1) networks, all predictions with a minimum confidence value are kept as long as they do not overlap too strongly (see 'min_confidence' and 'max_overlap').

The level declares how often the size of the feature map has already been scaled down. Thus, level 0 corresponds to feature maps with the size of the input image, level 1 to feature maps subscaled once, and so on. As a consequence, smaller objects are detected in the lower levels, whereas larger objects are detected in higher levels.

The value for 'min_level' needs to be at least 2.

If 'max_level' is larger than the number of levels the backbone can provide, the backbone is extended with additional (randomly initialized) convolutional layers in order to generate deeper levels. Further, 'max_level' may have an influence on the minimal input image size.

Note that for small input image dimensions, high levels might not be meaningful, as the feature maps could already be too small to contain meaningful information.

A higher number of used levels might increase the runtime and memory consumption, whereby especially the lower levels carry weight.

Default: 'max_level' = 6, 'min_level' = 2

'max_num_detections':

This parameter determines the maximum number of detections (bounding boxes) per image proposed by the network.

Default: 'max_num_detections' = 100

'max_overlap':

The maximum allowed intersection over union (IoU) for two predicted bounding boxes of the same class. Put differently, when two bounding boxes are classified into the same class and have an IoU higher than 'max_overlap', the one with the lower confidence value gets suppressed. We refer to the chapter Deep Learning / Object Detection for further explanations of the IoU.

Default: 'max_overlap' = 0.5
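For reference, the IoU of two boxes is the area of their intersection divided by the area of their union. A minimal sketch for axis-aligned boxes (not HALCON's implementation, which also handles oriented boxes):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (row1, col1, row2, col2)."""
    r1 = max(box_a[0], box_b[0])
    c1 = max(box_a[1], box_b[1])
    r2 = min(box_a[2], box_b[2])
    c2 = min(box_a[3], box_b[3])
    inter = max(0, r2 - r1) * max(0, c2 - c1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (0, 5, 10, 15)))  # 50 / 150, i.e. 1/3
```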

'max_overlap_class_agnostic':

The maximum allowed intersection over union (IoU) for two predicted bounding boxes independently of their predicted classes. Put differently, when two bounding boxes have an IoU higher than 'max_overlap_class_agnostic', the one with the lower confidence value gets suppressed. By default, 'max_overlap_class_agnostic' is set to 1.0, hence class-agnostic bounding box suppression has no influence.

Default: 'max_overlap_class_agnostic' = 1.0

'min_confidence':

This parameter determines the minimum confidence the classification of the image part within a bounding box must reach in order to keep the proposed bounding box. This means, when apply_dl_model is called, all output bounding boxes with a confidence value smaller than 'min_confidence' are suppressed.

Default: 'min_confidence' = 0.5
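How 'min_confidence' and 'max_overlap' interact during post-processing can be sketched as a greedy per-class non-maximum suppression. This is a conceptual sketch only; HALCON's actual post-processing is not publicly specified in this form:

```python
def suppress(detections, min_confidence=0.5, max_overlap=0.5):
    """Greedy per-class suppression sketch: drop boxes below min_confidence,
    then suppress boxes whose IoU with an already kept box of the same class
    exceeds max_overlap. A detection is (box, class_id, confidence) with
    box = (row1, col1, row2, col2)."""
    def iou(a, b):
        r1, c1 = max(a[0], b[0]), max(a[1], b[1])
        r2, c2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, r2 - r1) * max(0, c2 - c1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union > 0 else 0.0

    candidates = [d for d in detections if d[2] >= min_confidence]
    candidates.sort(key=lambda d: d[2], reverse=True)  # best confidence first
    kept = []
    for box, cls, conf in candidates:
        if all(k_cls != cls or iou(box, k_box) <= max_overlap
               for k_box, k_cls, _ in kept):
            kept.append((box, cls, conf))
    return kept
```

With 'max_overlap_class_agnostic' < 1.0, the same suppression would additionally run across classes, i.e. without the `k_cls != cls` exemption.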

'ignore_direction':

This parameter determines whether the direction of the object within an oriented bounding box is considered or not. If the direction within the bounding box shall not be considered, you can set 'ignore_direction' to 'true'. In order to determine the bounding box unambiguously, in this case (but only in this case) the following conventions apply:

  • 'phi' is within the interval (-π/2, π/2]

  • 'bbox_length1' > 'bbox_length2'

This is consistent with smallest_rectangle2.

Note that this parameter only affects networks of 'instance_type' = 'rectangle2'.

Possible values: 'true', 'false'

Default: 'ignore_direction' = 'false'
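The two conventions above can be illustrated by normalizing an arbitrary oriented box into the direction-ignoring representation. The function name and structure are hypothetical; only the convention (phi within (-π/2, π/2], the longer half-edge first, as in smallest_rectangle2) comes from the documentation:

```python
import math

def canonicalize_rect2(phi, length1, length2):
    """Hypothetical sketch: map an oriented box to the direction-ignoring
    convention, i.e. phi within (-pi/2, pi/2] and length1 >= length2."""
    if length1 < length2:
        # Swap the half-edge lengths; this rotates the box axes by 90 degrees.
        length1, length2 = length2, length1
        phi += math.pi / 2
    # Wrap phi into (-pi/2, pi/2]; rotating by pi leaves the box unchanged
    # once the direction is ignored.
    while phi > math.pi / 2:
        phi -= math.pi
    while phi <= -math.pi / 2:
        phi += math.pi
    return phi, length1, length2

print(canonicalize_rect2(2.0, 3.0, 8.0))
```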

Models of 'type' = 'segmentation'
'ignore_class_ids':

With this parameter you can declare one or multiple classes as 'ignore' classes, see the chapter Deep Learning / Semantic Segmentation for further information. These classes are declared via their IDs (integers).

Note that you cannot set a class ID in 'ignore_class_ids' and 'class_ids' simultaneously.

Execution Information

Parameters

DLModelHandle (input_control)  dl_model → (handle)

Handle of the deep learning model.

GenParamName (input_control)  attribute.name → (string)

Name of the generic parameter.

Default value: 'batch_size'

List of values: 'anchor_angles', 'anchor_aspect_ratios', 'anchor_num_subscales', 'backbone', 'batch_size', 'batch_size_multiplier', 'capacity', 'class_ids', 'class_ids_no_orientation', 'class_weights', 'classes', 'ignore_class_ids', 'ignore_direction', 'image_dimensions', 'image_height', 'image_num_channels', 'image_range_max', 'image_range_min', 'image_width', 'instance_type', 'learning_rate', 'max_level', 'max_num_detections', 'max_overlap', 'max_overlap_class_agnostic', 'min_confidence', 'min_level', 'momentum', 'num_classes', 'runtime', 'runtime_init', 'summary', 'type', 'weight_prior'

GenParamValue (output_control)  attribute.name(-array) → (integer / string / real)

Value of the generic parameter.

Result

If the parameters are valid, the operator get_dl_model_param returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.

Possible Predecessors

read_dl_model, set_dl_model_param

Possible Successors

set_dl_model_param, apply_dl_model, train_dl_model_batch

See also

set_dl_model_param

Module

Deep Learning Inference