Vision-guided Robot Heads for Space

NASA (Washington, DC), General Motors (Detroit, MI) and Oceaneering Space Systems (Houston, TX) are developing a humanoid robot known as Robonaut 2 (R2). Although the robot has the appearance and proportions of an astronaut, R2 has no lower torso; its body is fixed to a stationary rack. Designed to assist astronauts during extravehicular activities (EVAs), the robot combines a number of tactile, force, position, range-finding and vision sensors that allow it to perform functions such as object recognition and manipulation.
In the design of the R2 robot, a 3D time-of-flight (ToF) imager will be used in conjunction with a stereo camera pair to provide depth information and visible stereo images to the system. While a Swiss Ranger SR4000 time-of-flight camera from MESA Imaging (Zurich, Switzerland) will generate the 3D positional information, two Prosilica GC2450 GigE Vision cameras from Allied Vision (Stadtroda, Germany) will capture color stereo images. HALCON 9.0 image processing software from MVTec Software (Munich, Germany) will be used to integrate the various sensor data types in a single development environment.
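Fusing the two sensor types requires registering the ToF camera's 3D points with the color cameras' pixels. The article does not describe R2's actual calibration pipeline, so the following is only a minimal sketch of the underlying geometry: projecting points from the ToF camera frame into a color camera's image plane with a pinhole model. The function name, extrinsics (R, t) and intrinsics (fx, fy, cx, cy) are illustrative assumptions, not values from the system.

```python
import numpy as np

def project_tof_to_color(points_tof, R, t, fx, fy, cx, cy):
    """Project 3D points from the ToF camera frame into a color
    camera's image plane (simple pinhole model, no lens distortion).

    points_tof: (N, 3) array of 3D points in the ToF frame, meters.
    R, t: rotation (3x3) and translation (3,) from ToF to color frame.
    fx, fy, cx, cy: color-camera intrinsics, in pixels.
    Returns an (N, 2) array of pixel coordinates.
    """
    pts = points_tof @ R.T + t           # transform into the color frame
    u = fx * pts[:, 0] / pts[:, 2] + cx  # perspective divide, x axis
    v = fy * pts[:, 1] / pts[:, 2] + cy  # perspective divide, y axis
    return np.stack([u, v], axis=1)

# Illustrative numbers only: with identity extrinsics, a point 2 m
# straight ahead projects onto the principal point (cx, cy).
pixels = project_tof_to_color(
    np.array([[0.0, 0.0, 2.0]]),
    np.eye(3), np.zeros(3),
    fx=1200.0, fy=1200.0, cx=1224.0, cy=1024.0)
# → [[1224., 1024.]]
```

Once each ToF point has a pixel location in the color image, its depth value can be associated with the corresponding image region.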
To achieve robust, automatic recognition and pose estimation of objects, complex patterns from the stereo and ToF sensors will be analyzed. The ToF sensor separates the background from the objects in front of the robot, so that the background pixels in the camera images can be ignored. Since the system uses a single laptop computer connected to the robot both to control the robot's custom-built tactile, force and position sensors and to perform object recognition, processing power is limited. Because of this, regions of interest (ROIs) within the images will first be segmented based on color, pixel intensity or texture, and pattern recognition techniques then applied within them. "Searching for a pattern in a small ROI is much faster than searching for a pattern in a large image. Simple pattern recognition will be used to find the ROI, while complex pattern recognition will be used inside the ROI," says Hargrave.
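The background-suppression step described above can be sketched in a few lines: threshold the ToF range image to keep only near-field pixels, then take the bounding box of the surviving foreground as the ROI for the slower pattern-matching stage. This is a hypothetical NumPy illustration of the idea, not code from the R2 system; the function name and the 1.5 m range cutoff are assumptions.

```python
import numpy as np

def foreground_roi(depth, max_range=1.5):
    """Return the bounding box of foreground pixels in a ToF range
    image, for use as a pattern-matching ROI.

    depth: 2D array of per-pixel range in meters (0 = no return).
    max_range: assumed cutoff; farther pixels count as background.
    Returns (row0, row1, col0, col1) with half-open bounds, or None.
    """
    mask = (depth > 0) & (depth < max_range)
    if not mask.any():
        return None                       # nothing in front of the robot
    rows = np.any(mask, axis=1)           # rows containing foreground
    cols = np.any(mask, axis=0)           # columns containing foreground
    r0, r1 = np.argmax(rows), len(rows) - np.argmax(rows[::-1])
    c0, c1 = np.argmax(cols), len(cols) - np.argmax(cols[::-1])
    return r0, r1, c0, c1

# Toy 6x6 range image: background wall at 3 m, one object at 1 m.
depth = np.full((6, 6), 3.0)
depth[2:4, 1:5] = 1.0
roi = foreground_roi(depth)  # → (2, 4, 1, 5)
```

Searching for patterns only inside `roi` is what keeps the matching cost low on the single laptop described above.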
To compute the pose of the fasteners on a box lid, the location of the fastener components in each stereo image must be identified. HALCON's built-in classification techniques will be used together with stereo-pair calibration and ToF positional information to perform the pattern recognition functions. In this way, the system will generate a feasible trajectory along which the robot can fold back the lid and open the box.
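Recovering a fastener's 3D position from its location in both calibrated stereo images reduces to triangulation from disparity. The sketch below shows the standard rectified-stereo formulas (Z = f·B/d); the focal length, baseline and pixel coordinates are made-up example values, not parameters of the GC2450 pair.

```python
def triangulate(u_left, u_right, v, fx, fy, cx, cy, baseline):
    """Triangulate a 3D point from a matched feature in a rectified,
    calibrated stereo pair.

    u_left, u_right: horizontal pixel coordinate in each image.
    v: vertical pixel coordinate (equal in both images when rectified).
    fx, fy, cx, cy: shared camera intrinsics, in pixels.
    baseline: distance between the two cameras, in meters.
    Returns (X, Y, Z) in the left camera's frame, in meters.
    """
    d = u_left - u_right          # disparity in pixels
    Z = fx * baseline / d         # depth from disparity
    X = (u_left - cx) * Z / fx    # back-project to metric X
    Y = (v - cy) * Z / fy         # back-project to metric Y
    return X, Y, Z

# Illustrative values: 20 px disparity with an 800 px focal length
# and a 0.1 m baseline puts the feature 4 m away.
point = triangulate(u_left=400.0, u_right=380.0, v=240.0,
                    fx=800.0, fy=800.0, cx=320.0, cy=240.0,
                    baseline=0.1)
# → (0.4, 0.0, 4.0)
```

Triangulating several such components on one fastener yields enough 3D points to estimate its full pose, which the ToF data can then cross-check.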
Author: Dr. Lutz Kreutzer, MVTec Software GmbH
Article kindly provided by Vision Systems Design.