
A perfect match!
The search for a perfect match isn’t unique to dating platforms. It also applies to machine vision, where users are looking for perfect “counterparts” as well. Matching technologies in machine vision software ensure that objects in the image are precisely located. So instead of a person seeking a relationship, software is seeking an object. This has many benefits in industrial process chains. Robots can handle components safely and don’t reach into empty space. Workpieces can be precision-machined. Quality control doesn’t overlook any production errors. Andreas Hofhauser, Product Owner HALCON Library at MVTec, sheds light on why matching is one of the most fundamental machine vision technologies and what it takes to find the perfect match.

In nearly all machine vision applications, recognizing objects is one of the first steps. After the image has been acquired, the key question is usually: Where exactly is the relevant object located in the image? For example, if a robot is to grip an object safely, it has to know the object's exact position. This knowledge is also crucial for many other tasks in production and assembly. In semiconductor manufacturing, for instance, the positions of all microcomponents have to be determined precisely; otherwise, they can't be processed optimally. For filling and control processes in the beverage industry to run seamlessly, the exact locations of bottles and their labels must be known. These are just a few examples.
Matching should be fast, robust, and precise
The machine vision software now has the task of precisely extracting relevant information from the image data. The magic word here is matching technologies. Three main parameters are important for a successful solution: the speed of execution, the robustness of the method, and the accuracy of the result.
What does this mean exactly? When it comes to speed, the system often has only a few milliseconds to process an image. Robustness means that objects must also be detected in very noisy images, in which even human vision reaches its limits. Finally, some applications have to achieve a localization accuracy of 1/20 pixel and a repeatability of 1/100 pixel, i.e., the consistency with which the same position is reported when the measurement is repeated.
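Sub-pixel accuracies of this kind are typically reached by interpolating the match score around the best integer position, for instance by fitting a parabola through the peak and its two neighbors. The following is a minimal, illustrative sketch of that refinement step, not MVTec's implementation:

```python
def subpixel_peak(scores):
    """Refine the location of the best value in a 1-D list of match
    scores to sub-pixel precision by fitting a parabola through the
    peak and its two neighbors (a common refinement technique; this
    is an illustrative sketch only)."""
    i = max(range(len(scores)), key=lambda k: scores[k])
    if i == 0 or i == len(scores) - 1:
        return float(i)  # peak at the border: nothing to interpolate
    left, center, right = scores[i - 1], scores[i], scores[i + 1]
    denom = left - 2.0 * center + right
    if denom == 0:
        return float(i)  # flat plateau: keep the integer position
    # Vertex of the parabola through the three points
    return i + 0.5 * (left - right) / denom

# Example: scores sampled from a parabola peaking at x = 2.5
print(subpixel_peak([-6.25, -2.25, -0.25, -0.25, -2.25, -6.25]))  # → 2.5
```

The same idea applies in 2D, where the neighborhood of the best pixel is fitted with a paraboloid.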
If these performance requirements are met, matching technologies can optimize a wide range of machine vision applications. Robots, for example, receive the precise position data needed to grasp objects securely. At the same time, quality control answers the question of whether a component meets the exact requirements. In production workflows, matching allows workpieces to be accurately aligned for further processing steps on machines. Objects to be measured can also be precisely pinpointed. Finally, matching functions locate labels on products so that information can be read in the correct position using optical character recognition (OCR). As this wide range of applications shows, matching technologies are used in the production of almost every product in our daily lives.


What specifically are the different matching methods?
To locate the relevant part of an image unerringly, the matching process comprises multiple specific methods that differ from each other in fundamental ways. In correlation-based matching, the gray values, or grayscale levels, of the individual pixels are compared with each other. In other words, a correlation is drawn between the object's gray values and those of the image content. This method is a particularly good choice if the images are blurry or the object lacks distinct edges. However, clearly distinguishing between the different gray values requires powerful lighting. Another method is descriptor-based matching. This method originates from academic research, where it's common practice. A distinctive texture and prominent feature points are needed for accurate results. Because many objects in the production environment don't meet these requirements, the technology plays only a minor role in an industrial setting.
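As a rough illustration of the correlation-based idea, the following toy sketch computes the normalized cross-correlation (NCC) between a gray-value template and every window of an image and returns the best-scoring position. It assumes tiny images stored as lists of lists; real systems use heavily optimized implementations of the same principle:

```python
import math

def ncc(patch, template):
    """Normalized cross-correlation of two equally sized gray-value
    patches (flattened lists). Returns a score in [-1, 1]."""
    n = len(template)
    mp = sum(patch) / n
    mt = sum(template) / n
    num = sum((p - mp) * (t - mt) for p, t in zip(patch, template))
    dp = math.sqrt(sum((p - mp) ** 2 for p in patch))
    dt = math.sqrt(sum((t - mt) ** 2 for t in template))
    if dp == 0 or dt == 0:
        return 0.0  # constant patch: correlation undefined
    return num / (dp * dt)

def match_template(image, template):
    """Slide the template over the image and return the (row, col)
    of the best NCC score plus the score itself. Illustrative sketch
    of correlation-based matching."""
    th, tw = len(template), len(template[0])
    flat_t = [v for row in template for v in row]
    best, best_pos = -2.0, (0, 0)
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            flat_p = [image[r + i][c + j]
                      for i in range(th) for j in range(tw)]
            score = ncc(flat_p, flat_t)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best
```

Because the score is normalized by mean and standard deviation, it is insensitive to linear brightness and contrast changes, which is why the method copes with blurry images but still needs gray values that can be reliably told apart.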
The jewel in the crown: shape-based matching

The most important method is shape-based matching, which can be used to locate a wide range of objects precisely and robustly in real time. What's special about this method is that the technology delivers outstanding, subpixel-precise results even if objects are rotated, scaled, or partially covered. This is true even if objects extend partially beyond the image border or if the lighting fluctuates.
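The published idea behind shape-based matching is to represent the object as a set of edge points with associated gradient directions and to score a candidate pose by the mean normalized dot product between model and image gradients. The following toy sketch, restricted to a pure translation search and not reflecting MVTec's actual implementation, shows why such a score tolerates contrast changes and partial occlusion:

```python
import math

def gradients(img):
    """Central-difference gradients of a 2-D gray-value image,
    returned as {(row, col): (gx, gy)} for interior pixels."""
    h, w = len(img), len(img[0])
    g = {}
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            gx = (img[r][c + 1] - img[r][c - 1]) / 2.0
            gy = (img[r + 1][c] - img[r - 1][c]) / 2.0
            g[(r, c)] = (gx, gy)
    return g

def shape_score(model, image_grads, dr, dc):
    """Similarity of an edge model translated by (dr, dc): the mean
    normalized dot product between model and image gradient
    directions. Illustrative sketch of the idea only."""
    total = 0.0
    for (r, c), (mx, my) in model:
        gx, gy = image_grads.get((r + dr, c + dc), (0.0, 0.0))
        nm = math.hypot(mx, my)
        ni = math.hypot(gx, gy)
        if nm > 0 and ni > 0:
            total += (mx * gx + my * gy) / (nm * ni)
    return total / len(model)
```

Since only normalized gradient directions enter the score, global lighting changes cancel out, and occluded model points merely lower the average instead of breaking the match.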
Matching also works in 3D. Perspective matching, for example, is an extension of shape-based matching that allows the position of flat object parts of any shape to be determined in 3D space.
This method is particularly suitable for components that are tilted, creating a deformation in perspective. Shape-based matching also makes it possible to reliably determine the position and orientation of randomly arranged 3D objects in three-dimensional space based on their CAD model.
MVTec is recognized as an international technology leader for the speed, robustness, and accuracy of its matching methods. The HALCON and MERLIC software products offer cutting-edge matching features. These innovations include Deep 3D Matching. Here, a deep learning network is extensively trained based on a CAD model of the particular object. This method yields extremely robust results and raises the performance of industrial bin picking and pick-and-place applications to a new level. In shape-based matching, HALCON's extended parameter estimation is a brand-new development that enables the automatic estimation of a large number of parameters for relevant matching applications, eliminating the need to set them manually. Thus, even newcomers to the field of machine vision can carry out complex parametrization work that would otherwise require expert knowledge.
All matching technologies share a common trait: They significantly drive the advancement of industrial automation in various sectors.
Their use generally results in cost and effort savings as well as improved production quality. Thus, the “perfect match” is one of the key aspects of modern machine vision systems and will remain so in the future.