decode_structured_light_pattern (Operator)

Name

decode_structured_light_pattern — Decode the camera images acquired with a structured light setup.

Signature

decode_structured_light_pattern(CameraImages : : StructuredLightModel : )

Herror T_decode_structured_light_pattern(const Hobject CameraImages, const Htuple StructuredLightModel)

void DecodeStructuredLightPattern(const HObject& CameraImages, const HTuple& StructuredLightModel)

void HStructuredLightModel::DecodeStructuredLightPattern(const HImage& CameraImages) const

static void HOperatorSet.DecodeStructuredLightPattern(HObject cameraImages, HTuple structuredLightModel)

void HStructuredLightModel.DecodeStructuredLightPattern(HImage cameraImages)

def decode_structured_light_pattern(camera_images: HObject, structured_light_model: HHandle) -> None

Description

decode_structured_light_pattern decodes the camera images CameraImages that have been previously acquired with a structured light setup. The correspondence images and other intermediate results that are created by the decoding process are stored in the model StructuredLightModel and can be accessed afterwards using the operator get_structured_light_object.

In the following, the decoding process is explained in detail:

As mentioned for gen_structured_light_pattern, the first purpose is to find out whether a pixel lies in a region where a bright stripe or a dark stripe is reflected. To simplify this decision, the normalization images are used to determine a locally varying threshold that can cope with objects of varying reflectance and with changing lighting conditions. During the decoding of the acquired camera images, all Gray code images are compared with this previously calculated threshold. A pixel is classified as bright if its gray value is greater than or equal to the threshold.
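The following lines are only a hedged sketch of this classification rule, not the operator's internal code: they show how a single Gray code camera image could be classified against a per-pixel threshold derived from the two normalization images. The variable names BrightImage, DarkImage (all-white and all-black normalization images), and GrayCodeImage are illustrative assumptions.

* Sketch only: per-pixel threshold as the mean of the two normalization images
add_image (BrightImage, DarkImage, ThresholdImage, 0.5, 0)
* Pixels whose gray value is greater than or equal to the local threshold
* are classified as bright
dyn_threshold (GrayCodeImage, ThresholdImage, BrightRegion, 0, 'light')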

Furthermore, the pattern region is segmented during the decoding process. The segmentation is controlled by the parameter 'min_gray_difference' (see set_structured_light_model_param).

Assuming that n Gray code images have been processed, we get an n-bit binary code for each pixel. From this sequence, the row and column coordinates of the monitor can be derived up to the width of the smallest stripes.
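The per-pixel code is a standard Gray code, so the conversion to the binary stripe index follows the usual Gray-to-binary rule. The lines below are only an illustrative sketch of that conversion for a single code word (the operator performs the equivalent step per pixel internally); the value 12 is an arbitrary example.

* Sketch only: convert one Gray code word to its binary value
GrayCode := 12
Binary := GrayCode
Shift := rsh(GrayCode, 1)
while (Shift > 0)
    Binary := bxor(Binary, Shift)
    Shift := rsh(Shift, 1)
endwhile
* For GrayCode = 12 (Gray bits 1100) this yields Binary = 8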

If the StructuredLightModel is a hybrid system consisting not only of Gray code images but also of phase shift images (see gen_structured_light_pattern), the next step is to decode the latter. The result is a subpixel-precise correspondence image between the monitor coordinates and the camera coordinates, which contains the information which camera pixel observes which monitor pixel.
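As a hedged illustration, the decoded correspondences can be inspected per pixel after decoding. The sketch below assumes that the correspondence images are returned as an iconic object tuple and that the queried pixel lies inside the image; the exact number and meaning of the returned images depend on the model setup (see get_structured_light_object).

* Sketch only: read the decoded monitor coordinate seen by one camera pixel
get_structured_light_object (CorrespondenceImages, StructuredLightModel, \
                             'correspondence_image')
count_obj (CorrespondenceImages, NumCorrespondenceImages)
select_obj (CorrespondenceImages, FirstCorrespondence, 1)
* Example camera pixel (row 100, column 200)
get_grayval (FirstCorrespondence, 100, 200, MonitorCoordinate)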

If the 'pattern_type' of the StructuredLightModel is set to 'single_stripe', the first step in the decoding process is to determine which single stripe illuminated a camera pixel. The Gray code sequence and the phase are then used to refine the position within the detected stripe.
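The pattern type has to be chosen before the patterns are generated. The call below shows the 'single_stripe' case; whether this is set via set_structured_light_model_param in your HALCON version, and which value selects the hybrid Gray code plus phase shift mode, should be checked in the reference of set_structured_light_model_param.

* Select single stripe patterns before gen_structured_light_pattern
set_structured_light_model_param (StructuredLightModel, 'pattern_type', \
                                  'single_stripe')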

In real-world setups, the detected Gray code sequence of a pixel may be wrong. This can lead to values in the correspondence images that represent monitor rows or columns outside the actual monitor height and width. To avoid these problems, the last step of the decoding process removes such values from the correspondence images.
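If desired, this can be verified after decoding, for example by checking that the remaining correspondence values do not exceed the pattern dimensions. The sketch below assumes that removed values are reflected in the image domain and that the first returned image holds the column correspondences; both are assumptions, so consult get_structured_light_object for the actual structure.

* Sketch only: check the value range of the decoded column correspondences
get_structured_light_model_param (StructuredLightModel, 'pattern_width', \
                                  PatternWidth)
select_obj (CorrespondenceImages, ColumnCorrespondence, 1)
get_domain (ColumnCorrespondence, DecodedRegion)
min_max_gray (DecodedRegion, ColumnCorrespondence, 0, MinValue, MaxValue, \
              ValueRange)
* MaxValue should not exceed PatternWidth - 1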

Execution Information

This operator modifies the state of the following input parameter:

StructuredLightModel

During execution of this operator, access to the value of this parameter must be synchronized if it is used across multiple threads.

Parameters

CameraImages (input_object)  (multichannel-)image(-array) → object (byte / uint2)

Acquired camera images.

StructuredLightModel (input_control, state is modified)  structured_light_model → (handle)

Handle of the structured light model.

Example (HDevelop)

* Create the model
create_structured_light_model ('deflectometry', StructuredLightModel)
* Set the size of the monitor
set_structured_light_model_param (StructuredLightModel, \
                                  'pattern_width', 1600)
set_structured_light_model_param (StructuredLightModel, \
                                  'pattern_height', 1200)
* Set the smallest width of the stripes in the pattern
set_structured_light_model_param (StructuredLightModel, \
                                  'min_stripe_width', 8)
* Generate the patterns to project
gen_structured_light_pattern (PatternImages, StructuredLightModel)
* Set the expected black/white contrast in the region of interest
set_structured_light_model_param (StructuredLightModel, \
                                  'min_gray_difference', 70)
* Decode the camera images
decode_structured_light_pattern (CameraImages, StructuredLightModel)
* Get the computed correspondences and defects
get_structured_light_object (CorrespondenceImages, StructuredLightModel, \
                             'correspondence_image')
* Set the smoothing used to compute the defect image
* (Sigma is an application-specific value; 1.0 is only an example)
Sigma := 1.0
set_structured_light_model_param (StructuredLightModel, 'derivative_sigma', \
                                  Sigma)
get_structured_light_object (DefectImage, StructuredLightModel, \
                             'defect_image')

Result

The operator decode_structured_light_pattern returns the value TRUE if the given parameters are valid. Otherwise, an exception is raised.

Possible Predecessors

gen_structured_light_pattern

Possible Successors

get_structured_light_object

See also

create_structured_light_model, set_structured_light_model_param

Module

3D Metrology