
calibrate_cameras (Operator)

Name

calibrate_cameras — Calibrate one or more cameras.

Signature

calibrate_cameras( : : CalibDataID : Error)

Herror calibrate_cameras(const Hlong CalibDataID, double* Error)

Herror T_calibrate_cameras(const Htuple CalibDataID, Htuple* Error)

Herror calibrate_cameras(const HTuple& CalibDataID, double* Error)

double HCalibData::CalibrateCameras() const

void CalibrateCameras(const HTuple& CalibDataID, HTuple* Error)

double HCalibData::CalibrateCameras() const

void HOperatorSetX.CalibrateCameras(
[in] VARIANT CalibDataID, [out] VARIANT* Error)

double HCalibDataX.CalibrateCameras()

static void HOperatorSet.CalibrateCameras(HTuple calibDataID, out HTuple error)

double HCalibData.CalibrateCameras()

Description

The operator calibrate_cameras calibrates a setup of one or more cameras based on the calibration data model CalibDataID. The calibration algorithm used depends on the calibration setup type specified during the creation of the data model, see create_calib_data. Currently, only the type 'calibration_object' is supported.

In this setup, one or more cameras are calibrated, i.e., their internal parameters (e.g., their focal length) and their poses relative to the coordinate system of a so-called reference camera are calculated (see the paragraph "Used 3D camera model" for details of the 3D point projection model used). For this, one or more calibration objects (e.g., the HALCON calibration plate) are placed in front of the cameras. These objects have precisely known metric properties. The calibration objects are observed by the cameras in different calibration object poses, i.e., the cameras acquire an image of the calibration objects for each of their poses and extract metric information. Note that only cameras of the same type can be calibrated in a single setup. Furthermore, line-scan cameras can only be calibrated one at a time.

The camera calibration corresponds to an optimization of the internal parameters and the poses of the cameras and of the calibration objects' poses such that the back projection of calibration object feature points into the modeled cameras fits the actually observed projections as well as possible. Note that the optimization needs an initial estimate for the internal camera parameters. In contrast, the initial poses of both cameras and calibration objects are not needed and remain at first undefined. Instead, for each observation a rough estimate of the pose of the calibration object relative to the observing camera is required. From these poses, the camera and calibration object poses are initialized in the first step of the camera calibration calibrate_cameras (see the section "Performing the actual camera calibration" for more details).

Preparing the calibration input data

Before calling calibrate_cameras, you must create and fill the calibration data model with the following steps:

  1. Create a calibration data model with the operator create_calib_data, specifying the number of cameras in the setup and the number of used calibration objects.

  2. Specify the camera type and the initial internal camera parameters for all cameras with the operator set_calib_data_cam_param.

  3. Specify the description of all calibration objects with the operator set_calib_data_calib_object.

  4. Collect observation data with the operator set_calib_data_observ_points, i.e., the image coordinates of the extracted calibration marks of the calibration object and a roughly estimated pose of the calibration object relative to the observing camera.

  5. Configure the calibration process, e.g., specify the reference camera or exclude certain internal or external camera parameters from the optimization. With the operator set_calib_data, you can specify parameters for the complete setup, or configure parameters of individual cameras or calibration object poses in the setup.

    For example, if certain camera parameters, such as the image sensor cell size, are known and only the remaining parameters need to be calibrated, you call

         set_calib_data(CalibDataID,'camera','general','excluded_settings',['sx','sy']).
      

Performing the actual camera calibration

Depending on the camera type being calibrated in the setup, calibrate_cameras performs the calibration in two different ways.

For projective area-scan cameras, 'area_scan_division' and 'area_scan_polynomial', the calibration is performed in four steps. First, the algorithm tries to build a chain of observation poses, which connects all cameras and calibration object poses to the reference camera, e.g.:

        obs[0,0,0]            obs[1,0,0]     obs[1,0,1]           obs[2,0,1]      obs[2,..]
  camera[0] -> calib_obj_pose[0,0] <- camera[1] -> calib_obj_pose[0,1] <- camera[2] -> ...
(ref_camera)

If there is a camera that cannot be reached (i.e., it is not observing any calibration object pose that can be connected in the chain), the calibration process is terminated with an error. Otherwise, the algorithm initializes all calibration items' poses by going down this chain. In the second step, calibrate_cameras performs the actual optimization for all optimization parameters that were not explicitly excluded from calibration. Based on the so-far calibrated cameras, the algorithm corrects in the third step all observations that contain mark contour information (see find_calib_object). Then, the calibration setup is optimized anew for the corrections to take effect. If no contour information was available, this step is skipped. In the last step, calibrate_cameras computes the standard deviations and the covariances of the calibrated internal camera parameters.
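
The chain-building step amounts to a reachability check on a bipartite graph of cameras and calibration object poses. The following Python sketch illustrates this idea only; it is not HALCON code, and the function and variable names are hypothetical:

```python
from collections import deque

def cameras_connected(num_cameras, observations, ref_cam=0):
    """Check whether every camera can be reached from the reference camera
    through a chain of shared calibration-object-pose observations.
    `observations` holds (camera_index, calib_obj_pose_index) pairs."""
    cam_to_poses, pose_to_cams = {}, {}
    for cam, pose in observations:
        cam_to_poses.setdefault(cam, set()).add(pose)
        pose_to_cams.setdefault(pose, set()).add(cam)
    reached = {ref_cam}
    queue = deque([ref_cam])
    while queue:                      # breadth-first search over the chain
        for pose in cam_to_poses.get(queue.popleft(), ()):
            for other in pose_to_cams[pose]:
                if other not in reached:
                    reached.add(other)
                    queue.append(other)
    return len(reached) == num_cameras

# The chain sketched above: cameras 0 and 1 share pose (0,0),
# cameras 1 and 2 share pose (0,1), so all three are connected.
observations = [(0, (0, 0)), (1, (0, 0)), (1, (0, 1)), (2, (0, 1))]
print(cameras_connected(3, observations))   # True
print(cameras_connected(4, observations))   # False: camera 3 unreachable
```

A camera that shares no calibration object pose with the rest of the setup would make this check fail, which corresponds to the error described above.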

For cameras of type 'area_scan_telecentric_division' or 'area_scan_telecentric_polynomial', the same four steps are executed as for projective area-scan cameras. In the first step (building a chain of observation poses that connects all cameras and calibration objects), additional conditions must hold. Since the pose of an object can only be determined up to a translation along the optical axis, each calibration object must be observed by at least two cameras to determine its relative location. Otherwise, its pose is excluded from the calibration. Also, since a planar calibration object appears the same from two different observation angles, the relative pose of the cameras among each other cannot be determined unambiguously. There are always two valid alternative relative poses. Note that both alternatives result in a consistent camera setup which can be used for measuring. Since the ambiguity cannot be resolved, the first of the alternatives is returned. Note also that, if the returned pose is not the real pose but the alternative one, this will result in a mirrored reconstruction. If the relative pose of the cameras cannot be determined, the calibration will return an error. The system can be extended by additional calibration objects or cameras as long as all cameras observe all calibration objects.

If any camera does not observe all of the calibration objects, an additional requirement must be considered. This may only happen for four or more cameras. For this purpose, the calibration is split up into several subsystems in which every calibration object is observed by every camera of the subsystem. Two subsystems are connected if they overlap, i.e., if at least one camera of each subsystem observes a calibration object in the other subsystem. If not all subsystems can be connected in a chain, the calibration will return an error.

For cameras of type 'line_scan', the operator internally calls camera_calibration. Therefore, some of the restrictions of camera_calibration are inherited as well: in addition to the already mentioned restrictions of only one camera and only one calibration object per setup, there is the further restriction that all observations need to contain the projection coordinates of all calibration marks of the calibration object. Furthermore, calibration with this camera type does not deliver information about standard deviations and covariances for the estimated optimization parameters.

Checking the success of the calibration

After a successful calibration, the root mean square error of the back projection of the optimization is returned in Error (in pixels) and gives a general indication of whether the optimization was successful.

If only a single camera is calibrated, an Error in the order of 0.1 pixel (the typical detection error when extracting the coordinates of the projected calibration marks) is an indication that the optimization fits the observation data well. If Error strongly differs from 0.1 pixel, the calibration did not perform well. Reasons for this might be, e.g., poor image quality, an insufficient number of calibration images, or an inaccurate calibration plate. If more than one camera is calibrated simultaneously, the value of Error is more difficult to judge. As a rule of thumb, Error should be as small as possible and at least smaller than 1.0, thus indicating that a subpixel-precise evaluation of the data is possible with the calibrated parameters. This value might be difficult to reach in particular configurations. For further analysis of the quality of the calibration, refer to the standard deviations and covariances of the estimated parameters (currently for projective area-scan cameras only, see get_calib_data).
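
Error can be read as the root mean square distance, in pixels, between the observed mark positions and their back projections. A minimal Python sketch of this interpretation (illustrative values only, not HALCON code):

```python
import math

def rms_backprojection_error(observed, projected):
    """Root mean square distance (in pixels) between observed calibration
    mark positions and the back projections of the model points."""
    sq = sum((ro - rp) ** 2 + (co - cp) ** 2
             for (ro, co), (rp, cp) in zip(observed, projected))
    return math.sqrt(sq / len(observed))

# Two marks, each off by 0.1 pixel in one coordinate -> Error of 0.1 pixel.
obs_marks = [(100.0, 200.1), (150.1, 250.0)]
back_proj = [(100.0, 200.0), (150.0, 250.0)]
err = rms_backprojection_error(obs_marks, back_proj)
```

With residuals of 0.1 pixel per mark, err evaluates to approximately 0.1, matching the rule of thumb above.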

Getting the calibration results

The results of the calibration, i.e., internal camera parameters, camera poses, calibration object poses, etc., can be queried with get_calib_data. The poses of telecentric cameras can only be determined up to a displacement along the z axis of the coordinate system of the respective camera. Therefore, all camera poses are moved along these axes until they all lie on a common sphere. The center of the sphere is defined by the pose of the first calibration object.

Used 3D camera model

In general, camera calibration means the exact determination of the parameters that model the (optical) projection of any 3D world point p(w) into a (sub-)pixel [r,c] in the image. The projection consists of multiple steps: First, the point p(w) is transformed from world into camera coordinates (points as homogeneous vectors, compare affine_trans_point_3d):

  /      \     / x \     /        \   /      \
  | p(c) |  =  | y |  =  |  R   t | * | p(w) |
  |      |     | z |     |        |   |      |
  \  1   /     \ 1 /     \ 0 0  1 /   \  1   /
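
As a minimal Python sketch of this transformation (using NumPy; illustrative only, not HALCON code):

```python
import numpy as np

def world_to_camera(R, t, p_w):
    """Apply the homogeneous transformation above: p(c) = R * p(w) + t."""
    H = np.eye(4)                     # build the 4x4 homogeneous matrix
    H[:3, :3] = R                     # rotation part
    H[:3, 3] = t                      # translation part
    return (H @ np.append(p_w, 1.0))[:3]

# With the identity rotation and a pure translation, the point just shifts.
p_c = world_to_camera(np.eye(3), [0.0, 0.0, 0.5], [0.1, -0.2, 1.0])
# -> approximately [0.1, -0.2, 1.5]
```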

Then, the point is projected into the image plane, i.e., onto the sensor chip.

For the modeling of this projection process, which is determined by the combination of camera, lens, and frame grabber used, HALCON provides the following three 3D camera models:

For area-scan cameras, the projection of the point p(c) that is given in camera coordinates into a (sub-)pixel [r,c] in the image consists of the following steps: First, the point is projected into the image plane, i.e., onto the sensor chip. If the underlying camera model is an area-scan pinhole camera, i.e., if the focal length is greater than 0, the projection is described by the following equations:

           / x \
    p(c) = | y |
           \ z /

    u = Focus * x / z
    v = Focus * y / z

In contrast, if the focal length is 0, the camera model of an area-scan telecentric camera is used, i.e., it is assumed that the optics of the lens of the camera performs a parallel projection. In this case, the corresponding equations are:

           / x \
    p(c) = | y |
           \ z /

    u = x
    v = y

For both types of area-scan cameras, the lens distortions can be modeled either by the division model or by the polynomial model. The division model uses one parameter (Kappa) to model the radial distortions.

The following equations transform the distorted image plane coordinates into undistorted image plane coordinates if the division model is used:

   u = u' / (1+Kappa*(u'^2+v'^2))
   v = v' / (1+Kappa*(u'^2+v'^2))

These equations can be inverted analytically, which leads to the following equations that transform undistorted coordinates into distorted coordinates if the division model is used:

   u' = (2*u) / (1+sqrt(1-4*Kappa*(u^2+v^2)))
   v' = (2*v) / (1+sqrt(1-4*Kappa*(u^2+v^2)))
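
Both directions of the division model can be checked against each other with a short Python sketch (the Kappa value and point coordinates are assumed, illustrative values; not HALCON code):

```python
import math

def undistort_division(ud, vd, kappa):
    """Distorted -> undistorted image-plane coordinates (division model)."""
    f = 1.0 + kappa * (ud * ud + vd * vd)
    return ud / f, vd / f

def distort_division(u, v, kappa):
    """Undistorted -> distorted coordinates, the analytic inverse above."""
    s = 1.0 + math.sqrt(1.0 - 4.0 * kappa * (u * u + v * v))
    return 2.0 * u / s, 2.0 * v / s

# Round trip: distorting and then undistorting recovers the original point.
u, v = 0.002, -0.001          # metric image-plane coordinates (assumed)
kappa = -800.0                # assumed, illustrative distortion parameter
ud, vd = distort_division(u, v, kappa)
u2, v2 = undistort_division(ud, vd, kappa)
print(abs(u2 - u) < 1e-12, abs(v2 - v) < 1e-12)   # True True
```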

The polynomial model uses three parameters (K1, K2, K3) to model the radial distortions, and two parameters (P1, P2) to model the decentering distortions. The following equations transform the distorted image plane coordinates into undistorted image plane coordinates if the polynomial model is used:

  u = u' + u'*(K1*d^2 + K2*d^4 + K3*d^6) +
            2*P1*u'*v' + P2*(d^2 + 2*u'^2)


   v = v' + v'*(K1*d^2 + K2*d^4 + K3*d^6) +
            P1*(d^2 + 2*v'^2) + 2*P2*u'*v'

   d = sqrt(u'^2+v'^2)

These equations cannot be inverted analytically. Therefore, distorted image plane coordinates must be calculated from undistorted image plane coordinates numerically.
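A simple numerical inversion is a fixed-point iteration that repeatedly corrects the distorted coordinates until the forward equations reproduce the given undistorted point. A Python sketch of this idea (the coefficients are assumed, illustrative values; not HALCON code):

```python
def poly_undistort(ud, vd, k1, k2, k3, p1, p2):
    """Distorted -> undistorted image-plane coordinates (polynomial model)."""
    d2 = ud * ud + vd * vd
    radial = k1 * d2 + k2 * d2 ** 2 + k3 * d2 ** 3
    u = ud + ud * radial + 2 * p1 * ud * vd + p2 * (d2 + 2 * ud * ud)
    v = vd + vd * radial + p1 * (d2 + 2 * vd * vd) + 2 * p2 * ud * vd
    return u, v

def poly_distort(u, v, k1, k2, k3, p1, p2, iters=50):
    """Undistorted -> distorted coordinates by fixed-point iteration."""
    ud, vd = u, v                      # start at the undistorted point
    for _ in range(iters):
        uu, vv = poly_undistort(ud, vd, k1, k2, k3, p1, p2)
        ud += u - uu                   # shrink the forward-model residual
        vd += v - vv
    return ud, vd

# Assumed, illustrative coefficients; the round trip should close numerically.
coeffs = (-1.0e4, 0.0, 0.0, 1.0e-3, -1.0e-3)
u, v = 0.003, 0.002
ud, vd = poly_distort(u, v, *coeffs)
u2, v2 = poly_undistort(ud, vd, *coeffs)
```

The iteration converges for moderate distortions because the correction terms are small compared to the coordinates themselves.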

Finally, the point is transformed from the image plane coordinate system into the image coordinate system, i.e., the pixel coordinate system:

    r = v' / Sy + Cy
    c = u' / Sx + Cx    
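
Putting the steps together for an area-scan pinhole camera with division-model distortion, the complete projection chain can be sketched in Python (all parameter values are assumed, illustrative ones; not HALCON code):

```python
import math

def project_pinhole(p_c, focus, kappa, sx, sy, cx, cy):
    """Project a 3D point given in camera coordinates to pixel
    coordinates [r, c] for an area-scan pinhole camera with
    division-model lens distortion."""
    x, y, z = p_c
    # 1. Perspective projection onto the image plane.
    u = focus * x / z
    v = focus * y / z
    # 2. Undistorted -> distorted image-plane coordinates (division model).
    s = 1.0 + math.sqrt(1.0 - 4.0 * kappa * (u * u + v * v))
    ud = 2.0 * u / s
    vd = 2.0 * v / s
    # 3. Image-plane -> pixel coordinates.
    r = vd / sy + cy
    c = ud / sx + cx
    return r, c

# Assumed, illustrative parameters: 8 mm lens, 8.3 um square pixels,
# principal point at the center of a 640x480 sensor, no distortion.
r, c = project_pinhole((0.01, 0.02, 0.5), focus=0.008, kappa=0.0,
                       sx=8.3e-6, sy=8.3e-6, cx=320.0, cy=240.0)
```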

For line-scan cameras, also the relative motion between the camera and the object must be modeled. In HALCON, the following assumptions for this motion are made:

  1. the camera moves with constant velocity along a straight line

  2. the orientation of the camera is constant

  3. the motion is equal for all images

The motion is described by the motion vector V = (Vx,Vy,Vz)' that must be given in [meter/scanline] in the camera coordinate system. The motion vector describes the motion of the camera, assuming a fixed object. In fact, this is equivalent to the assumption of a fixed camera with the object traveling along -V.

The camera coordinate system of line-scan cameras is defined as follows: The origin of the coordinate system is the center of projection. The z-axis is identical to the optical axis and directed so that the visible points have positive z coordinates. The y-axis is perpendicular to the sensor line and to the z-axis. It is directed so that the motion vector has a positive y-component. The x-axis is perpendicular to the y- and z-axis, so that the x-, y-, and z-axis form a right-handed coordinate system.

As the camera moves over the object during the image acquisition, the camera coordinate system also moves relative to the object, i.e., each image line is imaged from a different position. This means there would be an individual pose for each image line. To make things easier, in HALCON all transformations from world coordinates into camera coordinates and vice versa are based on the pose of the first image line only. The motion V is taken into account during the projection of the point p(c) into the image. Consequently, only the pose of the first image line is computed by the operator find_calib_object (and stored by calibrate_cameras in the calibration results).

For line-scan pinhole cameras, the projection of the point p(c) that is given in the camera coordinate system into a (sub-)pixel [r,c] in the image is defined as follows:

Assuming

           / x \
    p(c) = | y |,
           \ z /

the following set of equations must be solved for m, u', and t:

    m * D * u' = x - t * Vx
   -m * D * pv = y - t * Vy
    m * Focus  = z - t * Vz

with

                   1
    D  = -----------------------
         1 + Kappa*(u'*u' + pv*pv)

    pv = Sy*Cy

This already includes the compensation for radial distortions. Note that for line-scan cameras, only the division model for radial distortions can be used.

Finally, the point is transformed into the image coordinate system, i.e., the pixel coordinate system:

    c = u' / Sx + Cx
    r = t
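
The line-scan equations can be solved numerically: for a fixed u', the second and third equations are linear in m and t, and u' is then updated from the first equation. A Python sketch of this scheme (all parameter values are assumed, illustrative ones; not HALCON code; with Kappa = 0 the iteration converges immediately):

```python
def project_line_scan(p_c, focus, kappa, sx, sy, cx, cy, V, iters=20):
    """Solve the line-scan projection equations for m, u' and t, then
    convert the solution to pixel coordinates [r, c]."""
    x, y, z = p_c
    vx, vy, vz = V
    pv = sy * cy
    up = 0.0                                  # initial guess for u'
    for _ in range(iters):
        D = 1.0 / (1.0 + kappa * (up * up + pv * pv))
        # For fixed u', the second and third equations are linear in (m, t):
        #   -m*D*pv  + t*vy = y
        #    m*focus + t*vz = z
        det = -D * pv * vz - vy * focus
        m = (y * vz - vy * z) / det
        t = (-D * pv * z - focus * y) / det
        up = (x - t * vx) / (m * D)           # update u' from equation 1
    return t, up / sx + cx                    # (r, c)

# A point 0.2 m in front of the camera; motion of 1e-5 m per scanline in y.
r, c = project_line_scan((0.001, 0.004, 0.2), focus=0.008, kappa=0.0,
                         sx=7e-6, sy=7e-6, cx=256.0, cy=0.5,
                         V=(0.0, 1.0e-5, 0.0))
```

Substituting the resulting m, u', and t back into the three equations reproduces x, y, and z, which is a convenient way to verify the solver.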

Attention

A camera calibration data model CalibDataID cannot be shared between two or more threads. However, different camera calibration data models can be used independently and safely in different threads.

Parallelization

Parameters

CalibDataID (input_control)  calib_data (integer)

Handle of a calibration data model.

Error (output_control)  number (real)

Root mean square error of the back projection of the optimization.

Possible Predecessors

create_calib_data, set_calib_data_cam_param, set_calib_data_calib_object, set_calib_data_observ_points, find_calib_object, set_calib_data

Possible Successors

get_calib_data

References

J. Heikkilä: “Geometric Camera Calibration Using Circular Control Points”; PAMI-22, no. 6; pp. 1066-1077; 2000.

Module

Calibration

