The Need for Speed
A relentless focus on efficiency calls for improvements in response times in many applications, ranging from manufacturing to transportation control. To keep pace with demands for improvements in productivity and quality, industrial vision systems need to accelerate. Robotic manufacturing systems are now capable of forming and assembling complex products at extremely high speed. In the field of PCB assembly, for example, a ‘chip shooter’ component mounting system can place parts at a rate of more than 30 per second.
Placement accuracy is a concern with electronic-component chip shooters, demanding careful inspection to ensure that PCBs with incorrectly mounted parts do not make it to final assembly. Furthermore, it is important to spot issues with mounting accuracy as quickly as possible to avoid unnecessary wastage caused by further components being mounted on a board that is already outside of tolerance. The use of high-accuracy machine vision systems in the line, or of full-board inspection systems between assembly stages, is essential, especially where high pin-count or high-cost components are involved.
The result is a demand for high frame-rate cameras in machine vision systems, coupled with a communication interface capable of data transfer rates high enough to deliver full-resolution images continuously to the image-processing subsystem.
Lighting is a key factor in the effectiveness of industrial image processing. The illumination of the area of interest needs to be intense enough to allow the camera’s exposure time to be as short as possible. The direction of the illumination is just as important, to ensure high contrast between the features that need to be recognised and their background. Uniform illumination matters too, as it reduces the amount of post-processing needed for the image-processing software to detect each mounted component or product feature.
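To illustrate why intense illumination and short exposures go hand in hand, the sketch below estimates the longest exposure that keeps motion blur under one pixel. The object speed, pixel pitch and magnification are assumed figures chosen purely for illustration, not the parameters of any particular system.

```python
# A minimal sketch, assuming a linear relationship between object speed,
# exposure time and blur. All figures are illustrative assumptions.

def max_exposure_s(object_speed_m_s, pixel_pitch_m, magnification, max_blur_px=1.0):
    """Longest exposure (seconds) that keeps motion blur below max_blur_px."""
    # Image-plane movement per second = object speed * optical magnification;
    # dividing the allowed blur (in metres on the sensor) by that rate gives
    # the exposure limit.
    return max_blur_px * pixel_pitch_m / (object_speed_m_s * magnification)

# Assumed example: a placement head moving at 1 m/s, a 3.45 um pixel pitch
# and 0.1x magnification (field of view roughly ten times the sensor size).
t = max_exposure_s(object_speed_m_s=1.0, pixel_pitch_m=3.45e-6, magnification=0.1)
print(f"Exposure must stay below {t * 1e6:.1f} microseconds")  # ~34.5 us
```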
The Shadow of Larger Components
However, the ideal illumination conditions can often be difficult to achieve. Some parts of the product or sub-assembly to be captured may lie in the shadow of larger components. And the high levels of lighting needed to capture low-contrast features effectively - because they blend into the substrate - can lead to glare in other parts of the image.
Image processing in the back-end computer system can deal with these problems to some extent. But accuracy can be sacrificed in the compromises needed to ensure that the whole board is captured in one image. By exploiting the ability of CMOS imagers to deliver high frame rates, it is possible to overcome the problem of inconsistent illumination across the board.
In recreational photography, a now common technique is Wide Dynamic Range (WDR) capture, in which multiple shots are taken in rapid succession, each with a different exposure time. The resulting images are then combined computationally to produce a composite photograph with a much higher bit depth than a single image. The high dynamic range allows shading correction to be applied to parts of the image to make them easier to recognise, without the loss of effective bit depth that would be encountered with a traditional single-exposure image.
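As a rough indication of how such a composite is formed, the sketch below merges two frames of the same scene taken with different exposure times, assuming a linear sensor response. The weighting scheme and synthetic data are simplifications for illustration, not the method used by any particular camera.

```python
import numpy as np

def fuse_exposures(frames, exposure_times, saturation=255):
    """Merge 8-bit frames taken at different exposures into one
    floating-point estimate with a higher effective bit depth."""
    num = np.zeros(frames[0].shape, dtype=np.float64)
    den = np.zeros(frames[0].shape, dtype=np.float64)
    for frame, t in zip(frames, exposure_times):
        f = frame.astype(np.float64)
        # Trust mid-range pixels; down-weight values near the noise floor
        # or near saturation.
        weight = np.where((f > 5) & (f < saturation - 5), 1.0, 1e-6)
        num += weight * f / t          # normalise each frame to a common exposure
        den += weight
    return num / den

# Synthetic example: the same scene captured with a 1x and a 4x exposure.
scene = np.linspace(0.0, 4000.0, 256).reshape(16, 16)        # "true" brightness
short = np.clip(scene * 0.05, 0, 255).astype(np.uint8)        # short exposure
long_ = np.clip(scene * 0.20, 0, 255).astype(np.uint8)        # 4x longer exposure
wdr = fuse_exposures([short, long_], exposure_times=[1.0, 4.0])
```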
Increased Overall Image Quality
A further advantage of using multiple captures to produce a composite image is an increase in overall image quality. Effects such as heat haze can cause different parts of each successive image to be slightly blurred, making it harder for the image-processing software to spot potential production problems, or causing it to mistakenly fail a product that is then sent for time-consuming re-inspection or, worse, scrapped at considerable expense. Image averaging allows fast and effective removal of such image defects, allowing the software to focus on actual production problems.
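The principle is straightforward to demonstrate: random, frame-to-frame disturbances average towards zero while persistent features remain. The sketch below uses synthetic noise as a stand-in for effects such as heat haze; all figures are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# A persistent feature (for example a genuine defect) on an otherwise flat scene.
clean = np.zeros((64, 64))
clean[30:34, 30:34] = 200.0

# Capture N frames in quick succession, each with a random disturbance added.
N = 16
frames = [clean + rng.normal(0.0, 25.0, clean.shape) for _ in range(N)]

averaged = np.mean(frames, axis=0)   # random noise falls by roughly sqrt(N)

print("single-frame noise std:", round(float(np.std(frames[0] - clean)), 1))
print("averaged noise std:   ", round(float(np.std(averaged - clean)), 1))
```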
The use of WDR and image averaging (often referred to as multi-pixel averaging) provides key capabilities for the increasingly important area of intelligent transportation systems (ITS). Consistent lighting is much more difficult to achieve outside the controlled environment of a manufacturing line. The low elevation of the sun in the winter months leads to massive differences in lighting across the images used by number plate recognition systems on multilane highways. WDR makes it possible to read number plates even when they are heavily shaded by a vehicle alongside.
Thanks to high-sensitivity CMOS imaging technologies, imagers are now able to support both high-throughput capture and multiple-image capture techniques. In contrast to traditional imagers based on the charge-coupled device (CCD) architecture, CMOS imagers are able to exploit massive parallelism.
A Long Shift Register
Whereas a CCD imager needs to use a long shift register to read out the captured pixels in series - a severe bottleneck on throughput - a CMOS imager can place an A/D converter at the end of each column of pixels. The output registers can therefore collect digitised image information from an entire row of pixels simultaneously. The latest generation of Sony’s GS CMOS imagers can achieve frame rates as high as 150 frames per second at a resolution of 5 Mpixel. The typical throughput available from a CCD of comparable resolution is an order of magnitude lower.
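A back-of-the-envelope calculation shows what such a frame rate implies for data handling. The pixel count and frame rate are the figures quoted above; the bit depth is an assumption for illustration.

```python
# Raw data rate for a 5 Mpixel sensor running at 150 frames per second,
# assuming 8-bit monochrome output (an assumption for illustration).
pixels_per_frame = 5_000_000
frames_per_second = 150
bits_per_pixel = 8

data_rate_bps = pixels_per_frame * frames_per_second * bits_per_pixel
print(f"{data_rate_bps / 1e9:.1f} Gbit/s before protocol overhead")   # ~6.0 Gbit/s

# Read out serially through a single tap, this would demand a pixel clock of
# 750 MHz - which is why column-parallel A/D conversion is so valuable.
print(f"{pixels_per_frame * frames_per_second / 1e6:.0f} Mpixel/s pixel rate")
```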
The high frame rate of a camera such as the 150 fps XCL-SG510 allows high-speed capture with ample headroom for the use of multiple-exposure WDR and defect-removal techniques. The use of a global shutter further enhances accuracy by eliminating the distortion caused by less optimised rolling-shutter architectures.
In high-speed production, the rolling-shutter architecture causes problems because each row of pixels is exposed and read out in series. Each row is therefore captured at a slightly different point in time, which distorts fast-moving objects as they pass the camera. A global shutter ensures that all pixels within an image are exposed at the same point in time.
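A simple calculation, using assumed figures for sensor height, per-row readout time and object speed, shows the scale of the effect.

```python
# Rolling-shutter skew estimate. All figures are assumptions for illustration.
rows = 2048                    # sensor height in rows
line_time_s = 10e-6            # time offset between reading successive rows
object_speed_px_s = 20_000     # horizontal object speed in pixels per second

# A rolling shutter exposes the last row (rows * line_time_s) later than the
# first, so a horizontally moving object is sheared by this many pixels:
skew_px = object_speed_px_s * line_time_s * rows
print(f"Rolling shutter: ~{skew_px:.0f} px of shear across the frame")
print("Global shutter:   0 px - all rows are exposed at the same instant")
```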
Spatial accuracy in such high-speed systems is also crucial to reduce the probability of misrecognition and reduce the computational burden on image processing systems by eliminating the need to perform translation and rotation corrections on captured images. The GS CMOS imagers have been designed to offer extremely fine tolerances on their mounting points to ensure high spatial accuracy.
Synchronised in Time
Because of the high frame rates now possible with advanced CMOS imagers such as the Sony GS CMOS series, it is important that cameras and other systems on the production line - or in applications such as ITS - are synchronised in time. A key technology for inter-system time synchronisation is the IEEE 1588 Precision Time Protocol (PTP). By synchronising systems on a network, such as Gigabit Ethernet, to a common clock, PTP ensures that an object seen in one particular frame can be accurately and reliably identified for removal or remedial processing by robotic systems downstream.
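A simple sketch of the benefit: once every device shares a PTP clock, a timestamp recorded downstream can be matched directly to the frame in which the object was seen. The timestamps and the 0.5 s transport delay below are invented for illustration.

```python
from bisect import bisect_left

def nearest(timestamps, t):
    """Return the value in a sorted list of timestamps closest to t."""
    i = bisect_left(timestamps, t)
    candidates = timestamps[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda x: abs(x - t))

# PTP-synchronised frame capture times (seconds) from an inspection camera
# running at roughly 150 fps...
camera_frames = [10.0000, 10.0067, 10.0133, 10.0200]
# ...and the moment a reject actuator further down the line must act on a
# part that passed the camera 0.5 s earlier (assumed transport delay).
actuator_event = 10.5133
frame_to_consult = nearest(camera_frames, actuator_event - 0.5)
print("Frame to consult:", frame_to_consult)   # 10.0133
```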
Gigabit Ethernet provides one means by which high-speed images can be delivered to image-processing computers; other options include dedicated interfaces such as Camera Link for the highest performance. Maximising the throughput of these interfaces also involves care in design, particularly where multi-camera systems are involved. The GS CMOS cameras employ techniques such as intelligent flow control to avoid packet collisions and so prevent bottlenecks forming as network conditions change.
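The arithmetic below, using an assumed camera count, bit depth and per-camera frame rate, illustrates why such flow control matters on a shared link.

```python
# Aggregate bandwidth demand of several cameras sharing one Gigabit Ethernet
# link. Camera count, bit depth and per-camera frame rate are assumptions.
link_capacity_bps = 1e9            # Gigabit Ethernet
cameras = 3
frame_bits = 5_000_000 * 8         # 5 Mpixel frames at 8 bits per pixel
fps_per_camera = 20                # reduced rate chosen for network transport

aggregate_bps = cameras * frame_bits * fps_per_camera
print(f"Aggregate demand: {aggregate_bps / 1e9:.1f} Gbit/s "
      f"on a {link_capacity_bps / 1e9:.0f} Gbit/s link")
# 2.4 Gbit/s > 1 Gbit/s: if all cameras transmit frames at the same instant,
# packets collide at the switch; pacing and flow control spread the traffic
# so each image still arrives intact.
```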
As a result, a deep understanding of the many factors that limit speed in machine-vision systems has yielded a camera architecture that provides a wide variety of industries with the ability to accelerate their processes and deliver efficiency gains to their customers.