apply_dl_model (Operator)

Name

apply_dl_model — Apply a deep-learning-based network on a set of images for inference.

Signature

apply_dl_model( : : DLModelHandle, DLSampleBatch, Outputs : DLResultBatch)

Herror T_apply_dl_model(const Htuple DLModelHandle, const Htuple DLSampleBatch, const Htuple Outputs, Htuple* DLResultBatch)

void ApplyDlModel(const HTuple& DLModelHandle, const HTuple& DLSampleBatch, const HTuple& Outputs, HTuple* DLResultBatch)

HDictArray HDlModel::ApplyDlModel(const HDictArray& DLSampleBatch, const HTuple& Outputs) const

static void HOperatorSet.ApplyDlModel(HTuple DLModelHandle, HTuple DLSampleBatch, HTuple outputs, out HTuple DLResultBatch)

HDict[] HDlModel.ApplyDlModel(HDict[] DLSampleBatch, HTuple outputs)

def apply_dl_model(dlmodel_handle: HHandle, dlsample_batch: Sequence[HHandle], outputs: Sequence[str]) -> Sequence[HHandle]

Description

apply_dl_model applies the deep-learning-based network given by DLModelHandle to the batch of input images handed over through the tuple of dictionaries DLSampleBatch. The operator returns DLResultBatch, a tuple with a result dictionary DLResult for every input image.

Please see the chapter Deep Learning / Model for more information on the concept and the dictionaries of the deep learning model in HALCON.

In order to apply the network to images, you have to hand them over through a tuple of dictionaries DLSampleBatch, where each dictionary refers to a single image. You can create such a dictionary conveniently using the procedure gen_dl_samples_from_images. The tuple DLSampleBatch can contain an arbitrary number of dictionaries. The operator apply_dl_model always processes a batch with up to 'batch_size' images simultaneously. In case the tuple contains more images, apply_dl_model iterates over the necessary number of batches internally. For a DLSampleBatch with fewer than 'batch_size' images, the tuple is padded to a full batch, which means that the time required to process a DLSampleBatch is independent of whether the batch is filled up or consists of just a single image. This also means that if fewer images than 'batch_size' are processed in one operator call, the network still requires the same amount of memory as for a full batch. The current value of 'batch_size' can be retrieved using get_dl_model_param.
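The batching behavior described above implies a simple cost model: runtime and memory scale with the number of full (possibly padded) batches, not with the number of images. A minimal plain-Python sketch of that arithmetic (the helper name is hypothetical and not part of the HALCON API):

```python
import math

def num_internal_batches(num_samples: int, batch_size: int) -> int:
    # apply_dl_model iterates over ceil(num_samples / batch_size) batches;
    # a partial final batch is padded, so it costs as much as a full one.
    return math.ceil(num_samples / batch_size)

# 10 samples with 'batch_size' = 4 -> 3 internal passes (4 + 4 + 2 padded to 4)
print(num_internal_batches(10, 4))
```

Hence a single image and a full batch take roughly the same time and memory per operator call.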

Note that the images might have to be preprocessed before feeding them into the operator apply_dl_model in order to fulfill the network requirements. You can retrieve the current requirements of your network, such as the image dimensions, using get_dl_model_param. The procedure preprocess_dl_dataset provides guidance on how to implement such a preprocessing stage.
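As an illustration of one such preprocessing step, the sketch below linearly rescales gray values into a target range. The concrete source and target ranges are assumptions chosen for illustration only; query the actual requirements of your model with get_dl_model_param.

```python
def rescale_gray_values(pixels, src_min=0.0, src_max=255.0,
                        dst_min=-127.0, dst_max=128.0):
    # Linear mapping from [src_min, src_max] to [dst_min, dst_max].
    scale = (dst_max - dst_min) / (src_max - src_min)
    return [dst_min + (p - src_min) * scale for p in pixels]

print(rescale_gray_values([0, 255]))  # endpoints map to [-127.0, 128.0]
```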

The results are returned in DLResultBatch, a tuple with a dictionary DLResult for every input image. Please see the chapter Deep Learning / Model for more information on the output dictionaries in DLResultBatch and their keys. In Outputs you can specify which output data is returned in DLResult. Outputs can be a single string, a tuple of strings, or an empty tuple, with which you retrieve all possible outputs. If apply_dl_model is used with an AI2-interface, it might be required to set 'is_inference_output' = 'true' for all requested layers in Outputs before the model is optimized for the AI2-interface; see optimize_dl_model_for_inference and set_dl_model_layer_param for further details. The values for Outputs depend on the model type of your network:

Models of 'type'='3d_gripping_point_detection'

Models of 'type'='anomaly_detection'

Models of 'type'='gc_anomaly_detection'

For each value of Outputs, DLResult contains an image in which each pixel holds the score of the corresponding input image pixel. Additionally, it contains a score for the entire image.

Models of 'type'='classification'

Models of 'type'='detection'

Models of 'type'='ocr_recognition'

Models of 'type'='segmentation'
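Conceptually, Outputs acts as a key filter on each result dictionary, with the empty tuple selecting everything. A plain-Python sketch of that selection rule (the dictionary keys here are illustrative; the real keys depend on the model type):

```python
def select_outputs(result: dict, outputs):
    # An empty outputs tuple/list means: return all available outputs.
    if not outputs:
        return dict(result)
    return {key: result[key] for key in outputs}

full = {"segmentation_image": "...", "segmentation_confidence": "..."}
print(select_outputs(full, ["segmentation_image"]))  # only the requested key
print(select_outputs(full, []))                      # all outputs
```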

Attention

System requirements: To run this operator on a GPU by setting 'device' to 'gpu' (see get_dl_model_param), cuDNN and cuBLAS are required. For further details, please refer to the “Installation Guide”, paragraph “Requirements for Deep Learning and Deep-Learning-Based Methods”.

Execution Information

This operator supports canceling timeouts and interrupts.

This operator supports breaking timeouts and interrupts.

Parameters

DLModelHandle (input_control)  dl_model → (handle)

Handle of the deep learning model.

DLSampleBatch (input_control)  dict-array → (handle)

Input data.

Outputs (input_control)  string-array → (string)

Requested outputs.

Default value: []

List of values: [], 'bboxhead2_prediction', 'classhead2_prediction', 'segmentation_confidence', 'segmentation_image'

DLResultBatch (output_control)  dict-array → (handle)

Result data.

Result

If the parameters are valid, the operator apply_dl_model returns the value 2 (H_MSG_TRUE). If necessary, an exception is raised.

Possible Predecessors

read_dl_model, train_dl_model_batch, train_dl_model_anomaly_dataset, set_dl_model_param

Module

Foundation. This operator uses dynamic licensing (see the “Installation Guide”). Which of the following modules is required depends on the specific usage of the operator:
3D Metrology, OCR/OCV, Deep Learning Inference