optimize_dl_model_for_inference (Operator)

Name

optimize_dl_model_for_inference — Optimize a model for inference on a device via the AI2-interface.

Signature

optimize_dl_model_for_inference( : : DLModelHandle, DLDeviceHandle, Precision, DLSamples, GenParam : DLModelHandleConverted, ConversionReport)

Herror T_optimize_dl_model_for_inference(const Htuple DLModelHandle, const Htuple DLDeviceHandle, const Htuple Precision, const Htuple DLSamples, const Htuple GenParam, Htuple* DLModelHandleConverted, Htuple* ConversionReport)

void OptimizeDlModelForInference(const HTuple& DLModelHandle, const HTuple& DLDeviceHandle, const HTuple& Precision, const HTuple& DLSamples, const HTuple& GenParam, HTuple* DLModelHandleConverted, HTuple* ConversionReport)

HDlModel HDlModel::OptimizeDlModelForInference(const HDlDeviceArray& DLDeviceHandle, const HString& Precision, const HDictArray& DLSamples, const HDict& GenParam, HDict* ConversionReport) const

HDlModel HDlModel::OptimizeDlModelForInference(const HDlDevice& DLDeviceHandle, const HString& Precision, const HDictArray& DLSamples, const HDict& GenParam, HDict* ConversionReport) const

HDlModel HDlModel::OptimizeDlModelForInference(const HDlDevice& DLDeviceHandle, const char* Precision, const HDictArray& DLSamples, const HDict& GenParam, HDict* ConversionReport) const

HDlModel HDlModel::OptimizeDlModelForInference(const HDlDevice& DLDeviceHandle, const wchar_t* Precision, const HDictArray& DLSamples, const HDict& GenParam, HDict* ConversionReport) const   (Windows only)

static void HOperatorSet.OptimizeDlModelForInference(HTuple DLModelHandle, HTuple DLDeviceHandle, HTuple precision, HTuple DLSamples, HTuple genParam, out HTuple DLModelHandleConverted, out HTuple conversionReport)

HDlModel HDlModel.OptimizeDlModelForInference(HDlDevice[] DLDeviceHandle, string precision, HDict[] DLSamples, HDict genParam, out HDict conversionReport)

HDlModel HDlModel.OptimizeDlModelForInference(HDlDevice DLDeviceHandle, string precision, HDict[] DLSamples, HDict genParam, out HDict conversionReport)

def optimize_dl_model_for_inference(dlmodel_handle: HHandle, dldevice_handle: MaybeSequence[HHandle], precision: str, dlsamples: Sequence[HHandle], gen_param: HHandle) -> Tuple[HHandle, HHandle]

Description

The operator optimize_dl_model_for_inference optimizes the input model DLModelHandle for inference on the device DLDeviceHandle and returns the optimized model in DLModelHandleConverted. The operator has two distinct functionalities: casting the model precision to Precision, and calibrating the model based on the given samples DLSamples. In either case, the model architecture may additionally be optimized for DLDeviceHandle.

The parameter DLDeviceHandle specifies the deep learning device for which the model is optimized. Whether a device supports optimization can be determined using get_dl_device_param with 'conversion_supported'. After a successful execution, optimize_dl_model_for_inference sets the parameter 'precision_is_converted' to 'true' for the output model DLModelHandleConverted.
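This check can be sketched in HDevelop as follows (the 'runtime' filter value 'gpu' and the variable names are illustrative):

```hdevelop
* Query the devices offered by the installed AI2-interfaces.
query_available_dl_devices ('runtime', 'gpu', DLDeviceHandles)
* Check whether the first device supports model conversion.
get_dl_device_param (DLDeviceHandles[0], 'conversion_supported', ConversionSupported)
if (ConversionSupported == 'true')
    * This device can be passed to optimize_dl_model_for_inference.
endif
```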

The parameter Precision specifies the precision to which the model is converted. By default, models delivered by HALCON have the Precision 'float32'. The set of supported target precisions depends on the device and its AI2-interface.

The parameter DLSamples specifies the samples on which the calibration is based. Consequently, they should be representative of the application; it is recommended to take them from the training split. For most applications, 10-20 samples per class are sufficient to achieve good results.

Note that the samples are not needed for a pure cast operation. In this case, an empty tuple can be passed for DLSamples.
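The two cases above might look as follows in HDevelop (a sketch: the precision values 'float16' and 'int8' are examples and must be supported by the device, and read_dl_samples is the standard DL procedure for loading samples from a dataset):

```hdevelop
* Pure cast: no calibration samples are required, pass an empty tuple.
create_dict (GenParam)
optimize_dl_model_for_inference (DLModelHandle, DLDeviceHandle, 'float16', [], GenParam, DLModelHandleConverted, ConversionReport)

* Calibration: pass representative samples, e.g. 10-20 per class
* from the training split of the dataset.
read_dl_samples (DLDataset, SampleIndices, DLSamples)
optimize_dl_model_for_inference (DLModelHandle, DLDeviceHandle, 'int8', DLSamples, GenParam, DLModelHandleConverted, ConversionReport)
```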

The parameter GenParam specifies additional, device-specific parameters and their values. Which parameters can be set for the given DLDeviceHandle, and their default values, can be queried via get_dl_device_param with the 'optimize_for_inference_params' parameter.

Note that certain devices accept only an empty dictionary.
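The device-specific parameters can be queried and inspected before the call, for example (a sketch):

```hdevelop
* Query the device-specific optimization parameters and their defaults.
get_dl_device_param (DLDeviceHandle, 'optimize_for_inference_params', GenParam)
* GenParam is a dictionary; list its keys, adapt entries if needed,
* and then pass it to optimize_dl_model_for_inference.
get_dict_param (GenParam, 'keys', [], ParamNames)
```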

The parameter ConversionReport returns a dictionary with information about the conversion.

Attention

This operator can only be used via an AI2-interface. Furthermore, after the optimization, the parameters of DLModelHandleConverted cannot be modified using set_dl_model_param (with the exception of 'runtime' and 'device'). When setting 'device' or 'runtime', the value must belong to the same AI2-interface that was used for the optimization.
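In practice this means only the following kind of change remains possible on the converted model (a sketch; the device handle must come from the same AI2-interface):

```hdevelop
* Allowed: move the converted model to a device of the same AI2-interface.
set_dl_model_param (DLModelHandleConverted, 'device', DLDeviceHandle)
* Not allowed: other parameters, e.g. 'batch_size', can no longer be
* changed on the converted model and raise an error.
```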

Execution Information

Parameters

DLModelHandle (input_control)  dl_model → (handle)

Input model.

DLDeviceHandle (input_control)  dl_device(-array) → (handle)

Device handle used for optimization.

Precision (input_control)  string → (string)

Precision the model shall be converted to.

DLSamples (input_control)  dict-array → (handle)

Samples required for optimization.

GenParam (input_control)  dict → (handle)

Parameter dict for optimization.

DLModelHandleConverted (output_control)  dl_model → (handle)

Output model with new precision.

ConversionReport (output_control)  dict → (handle)

Output report for conversion.

Result

If the parameters are valid, the operator optimize_dl_model_for_inference returns the value TRUE. If necessary, an exception is raised.

Possible Predecessors

train_dl_model_batch, query_available_dl_devices

Possible Successors

set_dl_model_param, apply_dl_model
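A typical end-to-end sequence combining these operators might look as follows (a sketch; the model file name, the precision 'int8', and the prepared DLSamples and DLSampleBatch are assumptions):

```hdevelop
* Query all available deep learning devices.
query_available_dl_devices ([], [], DLDeviceHandles)
* Read a trained model and the device-specific optimization parameters.
read_dl_model ('model_best.hdl', DLModelHandle)
get_dl_device_param (DLDeviceHandles[0], 'optimize_for_inference_params', GenParam)
* Optimize the model for the device, calibrating on prepared samples.
optimize_dl_model_for_inference (DLModelHandle, DLDeviceHandles[0], 'int8', DLSamples, GenParam, DLModelHandleConverted, ConversionReport)
* Run inference with the converted model.
apply_dl_model (DLModelHandleConverted, DLSampleBatch, [], DLResultBatch)
```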

Module

Deep Learning Inference