Direct interfacing of smart 3D vision system with robots
The use of small to medium-sized collaborative robots for factory automation applications is growing at a rapid rate.
Many of these applications are pick-and-place, so the robots require machine vision to visualize the scene, process the information to make control decisions, and execute precise mechanical movements.
2D or 3D vision systems are used to provide these critical functions, but integrating the vision system and robot can be demanding. LMI Technologies has developed a plugin that allows direct interfacing of its Gocator 3D snapshot sensors with robots from Universal Robots (UR).
Using the Gocator URCap plugin, the 3D snapshot sensor can be mounted on the robot and connected to it directly over Ethernet. The 3D coordinate system of the Gocator is mapped directly into the robot's coordinate system, so the resulting 3D vision-guided robotic system is simple and highly efficient. No additional software or PC is required.
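As a rough illustration of what this mapping involves (the URCap performs it internally), the sketch below applies a calibrated sensor-to-base transform to a point measured in the sensor frame. The matrix values, the function name, and the pick point are hypothetical assumptions, not LMI's implementation.

```python
# Minimal sketch, assuming a hypothetical hand-eye calibration result:
# a 3D point measured in the Gocator's coordinate frame is converted into
# the UR robot's base frame with a 4x4 homogeneous transform.
import numpy as np

# T_base_sensor: hypothetical sensor pose in the robot base frame (metres).
T_base_sensor = np.array([
    [0.0, -1.0, 0.0, 0.350],
    [1.0,  0.0, 0.0, 0.020],
    [0.0,  0.0, 1.0, 0.600],
    [0.0,  0.0, 0.0, 1.0],
])

def sensor_to_base(p_sensor):
    """Map an (x, y, z) point from sensor coordinates to robot base coordinates."""
    p = np.append(np.asarray(p_sensor, dtype=float), 1.0)  # homogeneous coordinates
    return (T_base_sensor @ p)[:3]

print(sensor_to_base([0.012, -0.004, 0.250]))  # a measured pick point, now in robot coordinates
```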
The benefits of 3D vision
2D-driven systems can only locate parts on a flat plane relative to the robot. Robotic systems equipped with 3D vision, on the other hand, can identify parts randomly posed in three dimensions (i.e., X-Y-Z), and accurately discover each part’s 3D orientation. This is a key capability for effective robotic pick-and-place.
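To make that concrete, the sketch below (not LMI's algorithm) recovers a part's centroid and in-plane orientation from a scanned point cloud using a simple principal-axis fit; the data and the part_pose helper are hypothetical.

```python
# Minimal sketch of recovering a part's 3D position and orientation from a
# point cloud: the centroid gives X-Y-Z, and a principal-axis fit on the X-Y
# footprint gives the rotation about the vertical axis (the "clock angle").
import numpy as np

def part_pose(points_xyz):
    """Return (centroid, yaw in radians) for a part's scanned points."""
    pts = np.asarray(points_xyz, dtype=float)
    centroid = pts.mean(axis=0)
    xy = pts[:, :2] - centroid[:2]
    eigvals, eigvecs = np.linalg.eigh(np.cov(xy.T))
    major = eigvecs[:, np.argmax(eigvals)]           # longest footprint axis
    yaw = np.arctan2(major[1], major[0])
    if yaw > np.pi / 2:                              # eigenvector sign is arbitrary,
        yaw -= np.pi                                 # so fold into (-90, 90] degrees
    elif yaw <= -np.pi / 2:
        yaw += np.pi
    return centroid, yaw

# Hypothetical scan of a rectangular part rotated about 30 degrees.
rng = np.random.default_rng(0)
local = rng.uniform([-0.05, -0.01, 0.0], [0.05, 0.01, 0.02], size=(500, 3))
c, s = np.cos(np.radians(30)), np.sin(np.radians(30))
world = local @ np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]]) + [0.4, -0.1, 0.12]
centroid, yaw = part_pose(world)
print(centroid, np.degrees(yaw))   # roughly (0.4, -0.1, 0.13) and ~30 degrees
```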
Gocator snapshot sensors combine fringe projection using blue-LED structured light with a rich array of built-in 3D measurement tools and decision-making logic to scan and inspect any part feature in stop/go motion at speeds up to 6 Hz.
The blue LED projects one or more high-contrast light patterns onto the object to allow full-field 3D point cloud acquisition by the stereo cameras, providing excellent ambient light immunity, even in challenging conditions. The combination of these metrology-grade snapshot sensors with UR devices gives a complete robotic solution that delivers high-performance 3D results in robot vision-guidance, quality control inspection, and automated assembly with smart pick and place.
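LMI does not publish the details of this reconstruction, but the generic N-step phase-shifting relation behind fringe projection can be sketched as follows; the wrapped_phase function and the synthetic test data are illustrative assumptions, not the sensor's actual pipeline.

```python
# Hedged illustration of generic N-step phase shifting (not LMI's pipeline):
# from N fringe images shifted by 2*pi/N, the wrapped phase at each pixel
# follows from two weighted sums over the image stack.
import numpy as np

def wrapped_phase(images):
    """images: array-like of shape (N, H, W) holding N phase-shifted fringe images."""
    imgs = np.asarray(images, dtype=float)
    shifts = 2 * np.pi * np.arange(imgs.shape[0]) / imgs.shape[0]
    num = np.tensordot(np.sin(shifts), imgs, axes=1)   # sum_k I_k * sin(delta_k)
    den = np.tensordot(np.cos(shifts), imgs, axes=1)   # sum_k I_k * cos(delta_k)
    return np.arctan2(-num, den)                       # wrapped phase in (-pi, pi]

# Synthetic check: rebuild a known phase map from four shifted fringe images.
true_phase = np.tile(np.linspace(-np.pi, np.pi, 64), (32, 1))
frames = [100 + 50 * np.cos(true_phase + 2 * np.pi * k / 4) for k in range(4)]
print(np.allclose(wrapped_phase(frames), true_phase, atol=1e-6))  # True
```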
Typical applications
The 3D vision-guided robots are generally equipped with a vacuum- or pneumatic-based gripper that allows the robot to contact the part on a variety of surfaces and transport it to a target destination while avoiding collisions. Parts can be positioned systematically or randomly on moving conveyors, stacked bins, or pallets.
Typical applications include:
- Pick-and-place of incoming raw materials or subassemblies travelling on a transport system (e.g., conveyor, pallets). Gocator scans the target part/assembly, reports its position in global coordinates, and guides the robot to place it either randomly or at a defined position on a conveyor/pallet.
- Picking randomly placed parts from a conveyor. Gocator scans a part as it travels down the conveyor and directs the robot to pick it up and place it into the appropriate bin.
- Placing finished products/assemblies into structured bins according to part height (e.g., stacks one, two, three, or four parts high). The Gocator sensor uses the 3D information to place each part in the appropriate bin and set it at the appropriate clock angle.
All three of these applications can be performed using just the Bounding Box, Height, and Part Matching tools from the 140+ built-in tools provided. This simplicity, with no need for additional programming, makes it easy to set up, run, and get the desired results from the system. Factory pre-calibration ensures that the system is ready to scan and measure straight out of the box.
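A minimal sketch of the decision logic behind the third application, with hypothetical values standing in for the Bounding Box and Height tool outputs and for the bin positions:

```python
# Hypothetical illustration: the measured stack height selects the destination
# bin, and the bounding-box angle is carried over as the placement clock angle.
PART_HEIGHT_MM = 22.0          # assumed nominal height of a single part
BIN_POSES_XY = {               # hypothetical bin drop-off positions in metres
    1: (0.30, -0.40), 2: (0.30, -0.25), 3: (0.30, -0.10), 4: (0.30, 0.05),
}

def choose_bin(measured_height_mm, bbox_angle_deg):
    """Return (bin_id, drop_xy, clock_angle_deg) for a measured stack."""
    stack_count = max(1, min(4, round(measured_height_mm / PART_HEIGHT_MM)))
    return stack_count, BIN_POSES_XY[stack_count], bbox_angle_deg

print(choose_bin(measured_height_mm=45.8, bbox_angle_deg=12.5))
# -> (2, (0.3, -0.25), 12.5): a two-high stack goes to bin 2 at its measured angle
```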
www.stemmer-imaging.com