calibrate_cameras — Calibrate one or more cameras.
The operator calibrate_cameras calibrates a setup of one or more cameras based on the calibration data model CalibDataID. The calibration algorithm that is used depends on the calibration setup type specified during the creation of the data model (see create_calib_data). Currently, only the type 'calibration_object' is supported.
In this setup, one or more cameras are calibrated, i.e., their internal parameters (e.g., their focal length) and their poses relative to the coordinate system of a so-called reference camera are calculated (see the paragraph "Used 3D camera model" for details of the 3D point projection model). For this, one or more calibration objects (e.g., the HALCON calibration plate) are placed in front of the cameras. These objects have precisely known metric properties. The calibration objects are observed by the cameras in different calibration object poses, i.e., the cameras acquire an image of the calibration objects for each of their poses and extract metric information. Note that only cameras of the same type can be calibrated in a single setup. Furthermore, line-scan cameras can only be calibrated one at a time.
The camera calibration corresponds to an optimization of the internal parameters of the cameras and of the poses of the cameras and calibration objects, such that the back projection of the calibration object feature points into the modeled cameras fits the actually observed projections as well as possible. Note that the optimization needs an initial estimate for the internal camera parameters. In contrast, the initial poses of both cameras and calibration objects are not needed and at first remain undefined. Instead, for each observation a rough estimate of the pose of the calibration object relative to the observing camera is required. From these poses, the camera and calibration object poses are initialized in the first step of calibrate_cameras (see the section "Performing camera calibration" for more details).
Before calling calibrate_cameras, you must create and fill the calibration data model with the following steps:
Create a calibration data model with the operator create_calib_data, specifying the number of cameras in the setup and the number of used calibration objects.
Specify the camera type and the initial internal camera parameters for all cameras with the operator set_calib_data_cam_param.
Specify the description of all calibration objects with the operator set_calib_data_calib_object.
Collect observation data with the operator set_calib_data_observ_points, i.e., the image coordinates of the extracted calibration marks of the calibration object and a roughly estimated pose of the calibration object relative to the observing camera.
Configure the calibration process, e.g., specify the reference camera or exclude certain internal or external camera parameters from the optimization. With the operator set_calib_data, you can specify parameters for the complete setup, or configure parameters of individual cameras or calibration object poses in the setup.
For example, if a certain camera parameter, such as the image sensor cell size, is known and only the remaining parameters need to be calibrated, you can exclude it from the optimization with a call like set_calib_data (CalibDataID, 'camera', 'general', 'excluded_settings', ['sx','sy']).
Depending on the camera type being calibrated in the setup, calibrate_cameras performs the calibration in two different ways.
For projective area-scan cameras, i.e., 'area_scan_division' and 'area_scan_polynomial', the calibration is performed in four steps. First, the algorithm tries to build a chain of observation poses, which connects all cameras and calibration object poses to the reference camera, e.g.:
camera[0] --obs[0,0,0]--> calib_obj_pose[0,0] <--obs[1,0,0]-- camera[1] --obs[1,0,1]--> calib_obj_pose[0,1] <--obs[2,0,1]-- camera[2] --obs[2,..]--> ...
(ref_camera)
If there is a camera that cannot be reached (i.e., it is not observing any calibration object pose that can be connected in the chain), the calibration process is terminated with an error. Otherwise, the algorithm initializes all calibration items' poses by going down this chain. In the second step, calibrate_cameras performs the actual optimization for all optimization parameters, which were not explicitly excluded from calibration. Based on the so-far calibrated cameras, the algorithm corrects in the third step all observations that contain mark contour information (see find_calib_object). Then, the calibration setup is optimized anew for the corrections to take effect. If no contour information was available, this step is skipped. In the last step, calibrate_cameras computes the standard deviations and the covariances of the calibrated camera internal parameters.
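The chain-building step can be thought of as a search in a bipartite graph whose nodes are cameras and calibration object poses and whose edges are the observations: every camera must be reachable from the reference camera. The following Python sketch illustrates this reachability check (function name and data layout are illustrative assumptions, not the HALCON-internal algorithm):

```python
from collections import deque

def connect_chain(num_cameras, observations, ref_camera=0):
    """Check whether every camera is reachable from the reference camera
    through shared observations of calibration object poses.
    observations: set of (camera_index, pose_index) pairs."""
    cams_of_pose = {}
    poses_of_cam = {}
    for cam, pose in observations:
        cams_of_pose.setdefault(pose, set()).add(cam)
        poses_of_cam.setdefault(cam, set()).add(pose)
    reached = {ref_camera}
    queue = deque([ref_camera])
    while queue:
        cam = queue.popleft()
        for pose in poses_of_cam.get(cam, ()):
            for other in cams_of_pose[pose]:
                if other not in reached:
                    reached.add(other)
                    queue.append(other)
    return reached == set(range(num_cameras))
```

In the chain shown above, camera 1 connects to the reference camera via pose [0,0], and camera 2 connects via pose [0,1]; a camera observing no shared pose would make the check fail.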
For cameras of type 'area_scan_telecentric_division' or 'area_scan_telecentric_polynomial', the same four steps are executed as for projective area-scan cameras. In the first step (building a chain of observation poses that connects all cameras and calibration objects), additional conditions must hold. Since the pose of an object can only be determined up to a translation along the optical axis, each calibration object must be observed by at least two cameras to determine its relative location. Otherwise, its pose is excluded from the calibration. Also, since a planar calibration object appears the same from two different observation angles, the relative pose of the cameras among each other cannot be determined unambiguously. There are always two valid alternative relative poses. Note that both alternatives result in a consistent camera setup which can be used for measuring. Since the ambiguity cannot be resolved, the first of the alternatives is returned. Note also that, if the returned pose is not the real pose but the alternative one, this will result in a mirrored reconstruction. If the relative pose of the cameras cannot be determined, the calibration will return an error. The system can be extended by additional calibration objects or cameras as long as all cameras are observing all calibration objects.
If any camera does not observe all of the calibration objects, an additional requirement must be considered. This can only happen for setups with four or more cameras. In this case, the calibration is split up into several subsystems in which every calibration object is observed by every camera in the subsystem. Two subsystems are connected if they overlap, i.e., if at least one camera of each subsystem is observing a calibration object in the other subsystem. If not all subsystems can be connected in a chain, the calibration will return an error.
For cameras of type 'line_scan', the operator internally calls camera_calibration. Therefore, some restrictions of camera_calibration are inherited as well: in addition to the already mentioned restrictions of only one camera and only one calibration object per setup, all observations must contain the projection coordinates of all calibration marks of the calibration object. Furthermore, the calibration of this camera type does not deliver information about standard deviations and covariances of the estimated optimization parameters.
After a successful calibration, the root mean square error of the back projection of the optimization is returned in Error (in pixels) and gives a general indication of whether the optimization was successful.
If only a single camera is calibrated, an Error in the order of 0.1 pixel (the typical detection error when extracting the coordinates of the projected calibration marks) indicates that the optimization fits the observation data well. If Error strongly differs from 0.1 pixel, the calibration did not perform well. Possible reasons are, e.g., poor image quality, an insufficient number of calibration images, or an inaccurate calibration plate. If more than one camera is calibrated simultaneously, the value of Error is more difficult to judge. As a rule of thumb, Error should be as small as possible and at least smaller than 1.0, thus indicating that a subpixel-precise evaluation of the data is possible with the calibrated parameters. This value might be difficult to reach in particular configurations. For further analysis of the quality of the calibration, refer to the standard deviations and covariances of the estimated parameters (currently for projective area-scan cameras only, see get_calib_data).
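One plausible way to compute such a root mean square back projection error — the RMS of the pixel distances between extracted mark positions and back-projected model points — is sketched below (the function name and the exact averaging convention are assumptions for illustration):

```python
import math

def rms_backprojection_error(observed, projected):
    """RMS distance (in pixels) between observed mark positions and
    back-projected model points, each given as (row, col) tuples.
    Illustrative computation, not necessarily HALCON's exact convention."""
    assert len(observed) == len(projected) and observed
    squared = [(ro - rp) ** 2 + (co - cp) ** 2
               for (ro, co), (rp, cp) in zip(observed, projected)]
    return math.sqrt(sum(squared) / len(squared))
```

A value near 0.1 for a single camera would then correspond to residuals of roughly a tenth of a pixel per mark.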
The results of the calibration, i.e., internal camera parameters, camera poses, calibration objects poses etc., can be queried with get_calib_data. The poses of telecentric cameras can only be determined up to a displacement along the z axis of the coordinate system of the respective camera. Therefore, all camera poses are moved along these axes until they all lie on a common sphere. The center of the sphere is defined by the pose of the first calibration object.
In general, camera calibration means the exact determination of the parameters that model the (optical) projection of any 3D world point p(w) into a (sub-)pixel [r,c] in the image. The projection consists of multiple steps: First, the point p(w) is transformed from world into camera coordinates (points as homogeneous vectors, compare affine_trans_point_3d):
/ p(c) \   / x \   /    R     t \   / p(w) \
|      | = | y | = |            | * |      |
\  1   /   | z |   \ 0  0  0  1 /   \  1   /
           \ 1 /
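The affine core of this homogeneous transform is just a rotation followed by a translation, p(c) = R * p(w) + t. A minimal Python sketch (R as a 3x3 nested list, t and p(w) as 3-vectors; the function name is illustrative):

```python
def world_to_camera(R, t, pw):
    """Transform a world point into camera coordinates: p(c) = R * p(w) + t."""
    return [sum(R[i][j] * pw[j] for j in range(3)) + t[i] for i in range(3)]
```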
Then, the point is projected into the image plane, i.e., onto the sensor chip.
For the modeling of this projection process that is determined by the used combination of camera, lens, and frame grabber, HALCON provides the following three 3D camera models:
Area-scan pinhole camera:
The combination of an area scan camera with a lens that effects a perspective projection and that may show radial and decentering distortions.
Area-scan telecentric camera:
The combination of an area scan camera with a telecentric lens that effects a parallel projection and that may show radial and decentering distortions.
Line-scan pinhole camera:
The combination of a line scan camera with a lens that effects a perspective projection and that may show radial distortions.
For area-scan cameras, the projection of the point p(c) that is given in camera coordinates into a (sub-)pixel [r,c] in the image consists of the following steps: First, the point is projected into the image plane, i.e., onto the sensor chip. If the underlying camera model is an area-scan pinhole camera, i.e., if the focal length is greater than 0, the projection is described by the following equations:
        / x \
p(c) =  | y |
        \ z /

u = Focus * x / z
v = Focus * y / z
In contrast, if the focal length is 0, the camera model of an area-scan telecentric camera is used, i.e., it is assumed that the optics of the lens of the camera performs a parallel projection. In this case, the corresponding equations are:
        / x \
p(c) =  | y |
        \ z /

u = x
v = y
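Both projections can be sketched in a few lines of Python, where Focus == 0 selects the telecentric (parallel) case as described above (function name and tuple interface are illustrative assumptions):

```python
def project_area_scan(pc, focus):
    """Project a camera-coordinate point pc = (x, y, z) into the image plane:
    perspective projection for focus > 0, parallel (telecentric) for focus == 0."""
    x, y, z = pc
    if focus > 0:
        return (focus * x / z, focus * y / z)  # area-scan pinhole camera
    return (x, y)                              # area-scan telecentric camera
```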
For both types of area-scan cameras, the lens distortions can be modeled either by the division model or by the polynomial model. The division model uses one parameter (Kappa) to model the radial distortions.
The following equations transform the distorted image plane coordinates into undistorted image plane coordinates if the division model is used:
u = u' / (1 + Kappa * (u'^2 + v'^2))
v = v' / (1 + Kappa * (u'^2 + v'^2))
These equations can be inverted analytically, which leads to the following equations that transform undistorted coordinates into distorted coordinates if the division model is used:
u' = 2*u / (1 + sqrt(1 - 4*Kappa*(u^2 + v^2)))
v' = 2*v / (1 + sqrt(1 - 4*Kappa*(u^2 + v^2)))
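The two equation pairs above are exact inverses of each other, which a short Python sketch can verify by a round trip (function names are illustrative; coordinates and Kappa in consistent image plane units):

```python
import math

def undistort_division(ud, vd, kappa):
    """Distorted -> undistorted image plane coordinates (division model)."""
    f = 1.0 + kappa * (ud * ud + vd * vd)
    return ud / f, vd / f

def distort_division(u, v, kappa):
    """Analytic inverse: undistorted -> distorted coordinates."""
    s = 1.0 + math.sqrt(1.0 - 4.0 * kappa * (u * u + v * v))
    return 2.0 * u / s, 2.0 * v / s
```

Distorting a point and undistorting the result recovers the original coordinates up to floating point precision.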
The polynomial model uses three parameters (K1, K2, K3) to model the radial distortions, and two parameters (P1, P2) to model the decentering distortions. The following equations transform the distorted image plane coordinates into undistorted image plane coordinates if the polynomial model is used:
u = u' + u'*(K1*d^2 + K2*d^4 + K3*d^6) + 2*P1*u'*v' + P2*(d^2 + 2*u'^2)
v = v' + v'*(K1*d^2 + K2*d^4 + K3*d^6) + P1*(d^2 + 2*v'^2) + 2*P2*u'*v'

with d = sqrt(u'^2 + v'^2)
These equations cannot be inverted analytically. Therefore, distorted image plane coordinates must be calculated from undistorted image plane coordinates numerically.
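One straightforward numeric scheme for this inversion — a sketch, not necessarily the method HALCON uses — is a fixed-point iteration that starts at the undistorted point and corrects the estimate by the residual of the forward model (function names are illustrative):

```python
def undistort_polynomial(ud, vd, k1, k2, k3, p1, p2):
    """Distorted -> undistorted image plane coordinates (polynomial model,
    forward direction as given by the equations above)."""
    d2 = ud * ud + vd * vd
    radial = k1 * d2 + k2 * d2 * d2 + k3 * d2 * d2 * d2
    u = ud + ud * radial + 2 * p1 * ud * vd + p2 * (d2 + 2 * ud * ud)
    v = vd + vd * radial + p1 * (d2 + 2 * vd * vd) + 2 * p2 * ud * vd
    return u, v

def distort_polynomial(u, v, k1, k2, k3, p1, p2, iters=50):
    """Numeric inverse by fixed-point iteration: subtract the residual of the
    forward model at the current estimate (converges for small distortions)."""
    ud, vd = u, v
    for _ in range(iters):
        uu, vv = undistort_polynomial(ud, vd, k1, k2, k3, p1, p2)
        ud, vd = ud - (uu - u), vd - (vv - v)
    return ud, vd
```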
Finally, the point is transformed from the image plane coordinate system into the image coordinate system, i.e., the pixel coordinate system:
r = v' / Sy + Cy
c = u' / Sx + Cx
For line-scan cameras, the relative motion between the camera and the object must be modeled as well. In HALCON, the following assumptions for this motion are made:
the camera moves with constant velocity along a straight line
the orientation of the camera is constant
the motion is equal for all images
The motion is described by the motion vector V = (Vx,Vy,Vz)' that must be given in [meter/scanline] in the camera coordinate system. The motion vector describes the motion of the camera, assuming a fixed object. In fact, this is equivalent to the assumption of a fixed camera with the object traveling along -V.
The camera coordinate system of line scan cameras is defined as follows: The origin of the coordinate system is the center of projection. The z-axis is identical to the optical axis and directed so that the visible points have positive z coordinates. The y-axis is perpendicular to the sensor line and to the z-axis. It is directed so that the motion vector has a positive y-component. The x-axis is perpendicular to the y- and z-axis, so that the x-, y-, and z-axis form a right-handed coordinate system.
As the camera moves over the object during the image acquisition, also the camera coordinate system moves relatively to the object, i.e., each image line has been imaged from a different position. This means there would be an individual pose for each image line. To make things easier, in HALCON all transformations from world coordinates into camera coordinates and vice versa are based on the pose of the first image line only. The motion V is taken into account during the projection of the point p(c) into the image. Consequently, only the pose of the first image line is computed by the operator find_calib_object (and stored by calibrate_cameras in the calibration results).
For line-scan pinhole cameras, the projection of the point p(c) that is given in the camera coordinate system into a (sub-)pixel [r,c] in the image is defined as follows:
With

        / x \
p(c) =  | y | ,
        \ z /

the following set of equations must be solved for m, u', and t:
 m * D * u' = x - t * Vx
-m * D * pv = y - t * Vy
 m * Focus  = z - t * Vz

with

D  = 1 / (1 + Kappa * (u'^2 + pv^2))
pv = Sy * Cy
This already includes the compensation for radial distortions. Note that for line scan cameras, only the division model for radial distortions can be used.
Finally, the point is transformed into the image coordinate system, i.e., the pixel coordinate system:
r = t
c = u' / Sx + Cx
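Because D depends on u', the system is not linear; one simple way to solve it is a fixed-point iteration that starts with D = 1, solves the then-linear system for t, m, and u', and updates D. The following Python sketch illustrates this (function name, parameter layout, and the solver itself are illustrative assumptions, not the HALCON-internal method; it assumes Vy*Focus + Vz*pv != 0):

```python
def project_line_scan(pc, focus, kappa, sx, sy, cx, cy, v_motion, iters=50):
    """Project a camera-coordinate point pc = (x, y, z) of a line-scan pinhole
    camera into the pixel (r, c), solving the equations above for m, u', t."""
    x, y, z = pc
    vx, vy, vz = v_motion   # motion vector V in [meter/scanline]
    pv = sy * cy
    D = 1.0
    for _ in range(iters):
        # eliminate m between the y- and z-equations and solve for t:
        #   m = (t*Vy - y) / (D*pv) = (z - t*Vz) / Focus
        t = (D * pv * z + y * focus) / (vy * focus + vz * D * pv)
        m = (z - t * vz) / focus
        u = (x - t * vx) / (m * D)
        D = 1.0 / (1.0 + kappa * (u * u + pv * pv))
    return t, u / sx + cx   # r = t, c = u'/Sx + Cx
```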
A camera calibration data model CalibDataID cannot be shared between two or more threads. Different camera calibration data models can, however, be used independently and safely in different threads.
CalibDataID: Handle of a calibration data model.
Error: Root mean square error of the back projection of the optimization.
create_calib_data, set_calib_data_cam_param, set_calib_data_calib_object, set_calib_data_observ_points, find_calib_object, set_calib_data
J. Heikkilä: "Geometric Camera Calibration Using Circular Control Points"; IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 6, pp. 1066-1077, 2000.