Perfectly positioned plastic bags – with classic machine vision and deep learning

Robotics | Deep Learning

Consistently automated pick-and-place processes depend on the reliable gripping of differently shaped and translucent objects. With the help of the MVTec HALCON machine vision software and its integrated deep learning methods, TEKVISA has implemented a sophisticated application that grips such objects dependably even when their surfaces are complex.

Plastic bags filled with assembly accessories come in many different shapes. In industrial processes, this often makes it difficult to precisely identify and automatically grip such bags, and productivity drops as a consequence, for example in pick-and-place operations. A solution from TEKVISA uses machine vision software to automate and speed up such processes reliably and consistently.

The Spanish company TEKVISA ENGINEERING specializes in digital inspection systems for quality control and process automation in industrial environments. Since its foundation, TEKVISA has pursued the goal of developing particularly user-friendly and advanced systems for a wide range of sectors, such as the automotive and food industries. In addition to deep learning-based inspection solutions, TEKVISA also develops sophisticated robotics and bin-picking applications.

Precise identification and positioning of accessory bags

The automation specialist has developed a robot-assisted picking system for a leading manufacturer of wall panels for offices. The system is based on machine vision with deep learning algorithms and precisely detects plastic bags with accessories for the wall panels so that robots can grip them securely. The goal was to fully automate what had previously been a purely manual and time-consuming task. In addition, the system was to relieve employees and free them up for more demanding work.

When developing a suitable automation solution, it was also important to bear in mind that the bags contain a large number of different accessories, which in turn leads to variations in size, weight, and appearance. In addition, they are randomly shaped and, due to their elasticity, may also be compressed, pulled apart, or deformed in some other way.

Major challenge due to product variance

This high product variance presented the engineers at TEKVISA with major challenges. They needed to develop a flexible solution based on machine vision that would reliably detect all conceivable variants of accessory bags and enable secure gripping. The system had to identify, based on position and orientation, which bags the robot arm could best pick up. It was clear from the outset that the solution would have to combine classic machine vision methods with newer technologies such as deep learning.

The setup consists of a high-resolution colour area scan camera and special lighting that minimizes reflections and enables precise detection of the respective bag contents. At the heart of the application is a machine vision system that accurately identifies the bags lying on a conveyor belt so that a robot can pick them up precisely. The robot then places them with high precision on the wall panel shortly before final packaging.

Classic machine vision and deep learning in combination

The machine vision solution selects the optimal candidates for picking from the numerous shapes and positions of the bags. The MVTec HALCON machine vision software, whose library of over 2,100 operators covers both modern, powerful classic methods and deep learning functionality, is used for this. The deep learning method "Object Detection" in particular met TEKVISA's requirements: the system is first trained comprehensively with sample images, so that the software learns the many different characteristics the bags can have. This results in a very robust detection rate.

Bags not selected for gripping are sorted out and fed back into the system. Repositioned in this way, they land in a more favourable position on the conveyor belt, where the robot can pick them up more easily and place them for dispatch. Overlapping and stacked bags can thus also be gripped and picked. Using the integrated machine vision software, the system analyzes and precisely identifies up to 60 bags per minute.
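The selection-and-recirculation logic described above can be sketched in a few lines. This is a language-agnostic illustration, not HALCON code or TEKVISA's actual implementation: the `Detection` record, the confidence and overlap thresholds, and the scoring are all hypothetical stand-ins for what an object-detection stage might emit.

```python
from dataclasses import dataclass

# Hypothetical record as an object-detection stage might emit it:
# bag centre on the belt, box size, detector confidence, and an
# estimate of how much the bag is covered by neighbouring bags.
@dataclass
class Detection:
    cx: float           # bag centre x on the belt (mm)
    cy: float           # bag centre y on the belt (mm)
    width: float
    height: float
    confidence: float   # detector score, 0..1
    overlap: float      # fraction of the box covered by other bags, 0..1

def pick_candidates(detections, min_conf=0.8, max_overlap=0.1):
    """Split detections into graspable bags and bags to recirculate.

    A bag counts as graspable when the detector is confident and the
    bag is largely free of overlap; the rest are routed back onto the
    belt so they land in a more favourable position on the next pass.
    Thresholds here are illustrative, not TEKVISA's tuned values.
    """
    graspable = [d for d in detections
                 if d.confidence >= min_conf and d.overlap <= max_overlap]
    recirculate = [d for d in detections if d not in graspable]
    # Grip the most clearly exposed, most confidently detected bag first.
    graspable.sort(key=lambda d: (d.overlap, -d.confidence))
    return graspable, recirculate
```

An overlapping or stacked bag simply fails the overlap check, ends up in the recirculation list, and is re-evaluated once it returns on the belt, which matches the re-feeding loop described above.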

Perfect harmony between camera and robot thanks to hand-eye calibration

In addition to deep learning technologies, classic machine vision methods, which are also an integral part of MVTec HALCON, contribute to the robust detection rates. An important element here is hand-eye calibration, which is required in advance so that the robot can precisely grip and place the bags observed by a stationary 2D camera during operation. For the calibration, a calibration plate is attached to the robot's gripper arm and brought into the camera's field of view. Several images are then taken with the robot in different poses and combined with the robot's axis positions. The result is a "common" coordinate system for camera and robot, which allows the robot to grip the bags at the positions the camera detected immediately beforehand. By determining the exact position of each object to an accuracy of 0.1 millimetres, a hit rate of 99.99% is achieved during the gripping process.
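Conceptually, the "common" coordinate system produced by hand-eye calibration is a rigid transform that maps points measured in the camera's image of the belt into the robot's base frame. The following is a minimal planar sketch under the assumption of a stationary camera looking straight down at the belt; the rotation angle and translation offsets are illustrative parameters, not values from TEKVISA's system, and the calibration step that estimates them (in HALCON, from plate images and robot poses) is not shown.

```python
import math

def camera_to_robot(x_cam, y_cam, theta, tx, ty):
    """Map a point from the camera/belt plane into the robot's base frame.

    theta, tx, ty describe a planar rigid transform (rotation plus
    translation) as hand-eye calibration would provide it for a
    stationary top-down 2D camera. Values here are illustrative.
    """
    x_rob = math.cos(theta) * x_cam - math.sin(theta) * y_cam + tx
    y_rob = math.sin(theta) * x_cam + math.cos(theta) * y_cam + ty
    return x_rob, y_rob
```

With a 90° rotation and an offset of (500, 200) millimetres, for example, a bag detected at camera coordinates (100, 0) maps to roughly (500, 300) in the robot frame. Once this transform is fixed by calibration, every detection can be converted into a grip position without any further measurement at runtime.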