get_dl_model_param (Operator)

Name

get_dl_model_param — Return the parameters of a deep learning model.

Signature

get_dl_model_param( : : DLModelHandle, GenParamName : GenParamValue)

Herror T_get_dl_model_param(const Htuple DLModelHandle, const Htuple GenParamName, Htuple* GenParamValue)

void GetDlModelParam(const HTuple& DLModelHandle, const HTuple& GenParamName, HTuple* GenParamValue)

HTuple HDlModel::GetDlModelParam(const HString& GenParamName) const

HTuple HDlModel::GetDlModelParam(const char* GenParamName) const

HTuple HDlModel::GetDlModelParam(const wchar_t* GenParamName) const   (Windows only)

static void HOperatorSet.GetDlModelParam(HTuple DLModelHandle, HTuple genParamName, out HTuple genParamValue)

HTuple HDlModel.GetDlModelParam(string genParamName)

def get_dl_model_param(dlmodel_handle: HHandle, gen_param_name: str) -> Sequence[Union[str, float, int]]

def get_dl_model_param_s(dlmodel_handle: HHandle, gen_param_name: str) -> Union[str, float, int]

Description

get_dl_model_param returns the parameter values GenParamValue of GenParamName for the deep learning model DLModelHandle.

For a deep learning model, parameters GenParamName can be set using set_dl_model_param or create_dl_model_detection, depending on the parameter and the model type. With this operator, get_dl_model_param, you can retrieve the parameter values GenParamValue. Below we give an overview and explanation of the different parameters, except for those that can only be set; for the latter, please see the documentation of the corresponding operator. The parameters are listed per model type. The abbreviations stand for the following deep learning model types:

'AD':

'type'='anomaly_detection' (Anomaly Detection)

'CL':

'type'='classification' (Classification)

'DO':

'type'='ocr_recognition' (Deep OCR recognition component)

'GC-AD':

'type'='gc_anomaly_detection' (Global Context Anomaly Detection)

'OD':

'type'='detection' (Object Detection, Instance Segmentation)

'SE':

'type'='segmentation' (Semantic Segmentation, Edge Extraction)

GenParamName (model types: 'AD', 'CL', 'DO', 'GC-AD', 'OD', 'SE')
'adam_beta1'
'adam_beta2'
'adam_epsilon'
'batch_size'
'batch_size_multiplier'
'batchnorm_momentum'
'class_ids'
'class_names'
'class_weights'
'device'
'enable_resizing'
'fuse_bn_relu'
'fuse_conv_relu'
'gpu'
'image_dimensions'
'image_height'
'image_width'
'image_num_channels'
'image_range_max'
'image_range_min'
'input_dimensions'
'layer_names'
'learning_rate'
'meta_data'
'momentum'
'num_classes' (NumClasses)
'num_trainable_params'
'optimize_for_inference'
'precision'
'precision_is_converted'
'runtime'
'runtime_init'
'solver_type'
'summary'
'type'
'weight_prior'

GenParamName (model types: 'AD', 'CL', 'DO', 'GC-AD', 'OD', 'SE')
'complexity'
'standard_deviation_factor'
'extract_feature_maps'
'alphabet'
'alphabet_internal'
'alphabet_mapping'
'anomaly_score_tolerance'
'gc_anomaly_networks'
'patch_size'
'anchor_angles'
'anchor_aspect_ratios'
'anchor_num_subscales'
'backbone' (Backbone)
'backbone_docking_layers'
'bbox_heads_weight'
'capacity'
'class_heads_weight'
'class_ids_no_orientation'
'freeze_backbone_level'
'ignore_direction'
'instance_segmentation'
'instance_type'
'max_level'
'min_level'
'max_num_detections'
'max_overlap'
'max_overlap_class_agnostic'
'min_confidence'
'mask_head_weight'
'ignore_class_ids'

Certain parameters are set as non-optional operator parameters; for these, the corresponding parameter name is given in brackets.

In the following we list and explain the parameters GenParamName whose values you can retrieve using this operator, get_dl_model_param. They are sorted according to the model type. Note that for models of 'type'='segmentation' the default values depend on the specific network and therefore have to be retrieved.

Applicable to several model types

'adam_beta1':

This value defines the moment for the linear term in the Adam solver. For more information, see the documentation of train_dl_model_batch. Only applicable for 'solver_type' = 'adam'.

Default: 'adam_beta1' = 0.9

'adam_beta2':

This value defines the moment for the quadratic term in the Adam solver. For more information, see the documentation of train_dl_model_batch. Only applicable for 'solver_type' = 'adam'.

Default: 'adam_beta2' = 0.999

'adam_epsilon':

This value defines the epsilon in the Adam solver formula and serves to ensure numerical stability. For more information, see the documentation of train_dl_model_batch. Only applicable for 'solver_type' = 'adam'.

Default: 'adam_epsilon' = 1e-08
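To illustrate how the three values enter the update rule, here is a minimal pure-Python sketch of a single Adam step in its generic textbook form (not HALCON's internal implementation; variable names are our own):

```python
# Single Adam update step for one weight (textbook formulation).
# beta1/beta2 decay the linear and quadratic moments; epsilon guards the division.

def adam_step(w, grad, m, v, t, lr=0.001,
              beta1=0.9, beta2=0.999, epsilon=1e-08):
    """Return updated (weight, first moment, second moment) after step t (t >= 1)."""
    m = beta1 * m + (1.0 - beta1) * grad          # linear (first) moment
    v = beta2 * v + (1.0 - beta2) * grad * grad   # quadratic (second) moment
    m_hat = m / (1.0 - beta1 ** t)                # bias correction
    v_hat = v / (1.0 - beta2 ** t)
    w = w - lr * m_hat / (v_hat ** 0.5 + epsilon) # epsilon ensures numerical stability
    return w, m, v

# First step from zero-initialized moments: the bias-corrected update
# is approximately lr * sign(grad), up to the epsilon term.
w, m, v = adam_step(w=1.0, grad=0.5, m=0.0, v=0.0, t=1)
```

With the defaults above, the first step moves the weight by roughly the learning rate, since the bias-corrected moments cancel the gradient magnitude.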

'batch_size':

Number of input images (and corresponding labels) in a batch that is transferred to device memory.

For training with train_dl_model_batch, the batch of images processed simultaneously in a single training iteration contains 'batch_size' times 'batch_size_multiplier' images. Please refer to train_dl_model_batch for further details.

For inference, 'batch_size' can generally be set independently of the number of input images. See apply_dl_model for details on how to set this parameter for greater efficiency.
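As a concrete bookkeeping example (plain Python, independent of HALCON): if an application loop feeds N images to a model with a fixed 'batch_size', the images end up being processed in ceil(N / 'batch_size') batches, the last of which may be only partially filled:

```python
import math

def num_inference_batches(num_images: int, batch_size: int) -> int:
    """Number of batches needed to process num_images with a fixed batch size;
    the last batch may be only partially filled."""
    return math.ceil(num_images / batch_size)

print(num_inference_batches(100, 32))  # 4 batches: 32 + 32 + 32 + 4
```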

Models of 'type'='classification':

The parameter 'batch_size' is stored in the pretrained classifier. By default, 'batch_size' is set such that training of the pretrained classifier with up to 100 classes can easily be performed on a device with 8 gigabytes of memory.

For the pretrained classifiers, the default values are hence given as follows:

pretrained classifier: default value of 'batch_size'
'pretrained_dl_classifier_alexnet.hdl': 230
'pretrained_dl_classifier_compact.hdl': 160
'pretrained_dl_classifier_enhanced.hdl': 96
'pretrained_dl_classifier_mobilenet_v2.hdl': 40
'pretrained_dl_classifier_resnet50.hdl': 23
'batch_size_multiplier':

Multiplier for 'batch_size' to enable training with larger numbers of images in one step, which would otherwise not be possible due to GPU memory limitations. This model parameter only affects train_dl_model_batch and thus has no impact during evaluation and inference. For detailed information, see train_dl_model_batch.

Models of 'type'='anomaly_detection':

The parameter 'batch_size_multiplier' has no effect.

Models of 'type'='ocr_recognition':

The parameter 'batch_size_multiplier' has no effect.

Default: 'batch_size_multiplier' = 1
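The effective number of images per training iteration is thus the product of the two parameters; a quick sanity check in plain Python (values chosen for illustration only):

```python
# Images processed per call of train_dl_model_batch:
# 'batch_size' images fit into device memory at once, and one training
# iteration covers 'batch_size_multiplier' such sub-batches.
batch_size = 23            # e.g. the default of 'pretrained_dl_classifier_resnet50.hdl'
batch_size_multiplier = 4  # illustrative value

images_per_iteration = batch_size * batch_size_multiplier
print(images_per_iteration)  # 92
```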

'class_ids':

Unique IDs of the classes the model shall distinguish. The tuple is of length 'num_classes'.

Note the slightly different meanings and restrictions depending on the model type:

Models of 'type'='anomaly_detection':

'class_ids' is not supported.

Models of 'type'='classification':

The IDs are unique identifiers that are automatically assigned to each class. The ID of a class corresponds to its index within the tuple 'class_names'.
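For classification, the IDs are therefore simply the tuple indices of 'class_names'; a minimal illustration in plain Python (the class names are made up for this example):

```python
# For 'type'='classification', the ID of a class is its index in 'class_names'.
class_names = ['apple', 'peach', 'pear']   # illustrative names
class_ids = list(range(len(class_names)))  # [0, 1, 2]

# Mapping a predicted class ID back to its name:
predicted_id = 2
print(class_names[predicted_id])  # pear
```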

Models of 'type'='detection':

Only the classes of the objects to be detected are included, hence no background class. You can set any integer within the interval as class ID value.

Note that the values of 'class_ids_no_orientation' depend on 'class_ids'. Thus, if 'class_ids' is changed after the creation of the model, 'class_ids_no_orientation' is reset to an empty tuple.

Default: 'class_ids' = '[0,...,num_classes-1]'

Models of 'type'='gc_anomaly_detection':

'class_ids' is not supported.

Models of 'type'='ocr_recognition':

'class_ids' is not supported.

Models of 'type'='segmentation':

Every class used for training has to be included, hence also the class ID of the 'background' class. Therefore, for such a model the tuple has a minimal length of 2. You can set any integer within the interval