Machine Vision

1 + 1 = 3D - Stereoscopic measurements

Camera-based 3D measurements

14.04.2014

By moving beyond the assumption of a single image plane, 3D vision systems have enabled significant gains in measurement accuracy. Here we look at how to specify a 3D measurement system and the trade-offs involved in its specification.

The increase in measurement accuracy delivered by machine vision systems in industrial applications is little short of astounding.
This has come not just from an increase in processing speed and a significant improvement in sensor pixel counts, but also from advances in camera module design, allowing cameras to better exploit new sensor technologies, and from software algorithms that can not only measure to sub-pixel accuracy but also cope with 3D information.
If we look back at camera measurement systems from little more than a decade ago, the contrast is stark. Camera technology delivered images of just 640x480 pixels, introducing significant errors because all measurements were made by counting pixels.
And even if we factor in the move to sub-pixel measurement - using well-calibrated systems that delivered improvements of more than an order of magnitude - systems still relied on the false assumption that all items lay in the same plane.
3D systems changed this. Indeed, the technology can now deliver measurement accuracies greater than 99 per cent.
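
To give a sense of what sub-pixel measurement involves in practice, here is a minimal sketch using OpenCV's corner refinement. The input file name and all parameter values are illustrative assumptions, not taken from any system described in this article.

```python
# A minimal sketch of sub-pixel feature measurement using OpenCV.
# The input file name and all parameter values are illustrative only.
import cv2
import numpy as np

gray = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)  # hypothetical test image

# First locate strong corners to whole-pixel precision.
corners = cv2.goodFeaturesToTrack(gray, maxCorners=50,
                                  qualityLevel=0.01, minDistance=10)

# Then refine each corner to a fraction of a pixel; on well-focused,
# well-lit edges this is what lifts accuracy beyond simple pixel counting.
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
refined = cv2.cornerSubPix(gray, np.float32(corners), winSize=(5, 5),
                           zeroZone=(-1, -1), criteria=criteria)

print(refined.reshape(-1, 2))  # fractional-pixel coordinates
```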

3D camera systems

Camera-based 3D measurements can be made in one of two ways: with single-camera or multi-camera systems. In both cases the output from the camera(s) is passed to a multicore PC running complex algorithms that provide the measurement data.
Multi-camera systems use two or more cameras positioned at a known, fixed distance and angle from one another. The output of each camera is then compared using increasingly powerful software algorithms.
The alternative uses a single camera combined with a projector, as in the original Microsoft Kinect body-tracking motion controller for the Xbox 360 (Sony's PSEye for the PlayStation 3 is another single-camera tracking approach). In this case, a known pattern of structured light (infrared or coloured) is projected onto the object. The extent to which the pattern is distorted (a flat surface produces no distortion) is then analysed and the shape determined by software algorithms.
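
To make the two-camera principle concrete, here is a minimal sketch assuming a pair of already-rectified images from cameras at a known baseline. The file names, focal length and baseline are assumptions for illustration, not the specification of any product mentioned here.

```python
# Depth from a rectified stereo pair: disparity via block matching,
# then triangulation with Z = f * B / d. All values are illustrative.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# For each pixel, find how far the matching feature has shifted
# horizontally between the two views (the disparity, in pixels).
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed point -> pixels

focal_px = 1200.0   # assumed focal length, in pixels
baseline_m = 0.10   # assumed 10 cm between the two cameras

# Triangulate: nearby objects shift more between views, so depth
# is inversely proportional to disparity.
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_px * baseline_m / disparity[valid]
```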

Improving accuracy

There will always be a trade-off between accuracy and budget, and the right balance will depend on the project.
If all you need is basic accuracy, it is possible to guide a picking robot arm using the output from two off-the-shelf webcams, running open-source software algorithms on a cheap PC or even a dual-core mobile processor.
The precision would be compromised and consumer equipment would fail far sooner than industrial-grade equipment, but it would not only be the pixel count that limited that precision.
From a hardware point of view, it is obviously true that an increased pixel count leads to greater precision. But 3D algorithms - especially in multi-camera systems - assume that the images being compared are taken from known positions and, crucially, at the same time.
This means perfect synchronisation of the cameras and, ideally, the same camera model being used to take each image.
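
The sketch below, using two consumer webcams, shows why this matters: the frames are grabbed one after the other rather than simultaneously, and the resulting skew translates directly into a 3D error for anything that is moving. The device indices are assumptions for illustration.

```python
# Rough demonstration of the synchronisation problem with free-running
# consumer webcams: the two "simultaneous" frames are actually grabbed
# one after the other. Device indices 0 and 1 are illustrative.
import time
import cv2

cam_left = cv2.VideoCapture(0)
cam_right = cv2.VideoCapture(1)

t0 = time.perf_counter()
ok_left, frame_left = cam_left.read()
t1 = time.perf_counter()
ok_right, frame_right = cam_right.read()
t2 = time.perf_counter()

# Any skew here becomes a 3D measurement error for moving parts;
# industrial cameras remove it with a shared hardware trigger.
print(f"left grab took {1e3 * (t1 - t0):.1f} ms")
print(f"right frame lags the left by roughly {1e3 * (t2 - t1):.1f} ms")

cam_left.release()
cam_right.release()
```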
By using industrial-grade equipment with components designed to last for several years, even when used in extreme environments - such as our XCG-575 5-megapixel GigE camera, or a cubic-format module for space-constrained applications - it is possible to reliably compare and match mechanical and operational specifications, such as CCD sensor alignment, trigger speed, exposure time, image quality and operating temperature, and so deliver a very stable exposure.
From a software point of view, speed is essential. This means accuracy can be improved with a more powerful PC, more efficient software, or both.

What is possible?

If we look at some of the leading systems integrators, we can see what is currently possible. Last month Tordivel announced its latest Scorpion 3D Stinger camera for industrial machine vision.
Tordivel states that it uses the highest grade of components to create one of the fastest systems on the market, allowing even fast-moving objects to be analysed and picked by an ABB Flexpicker robot in real time.
The state-of-the-art Scorpion 3D Stinger technology includes two XCG-5000E 5MP GigE cameras from Sony, a high-power white or IR strobed LED, an IR (830 nm) and red (630 nm) random-pattern projection laser from Osela, the Scorpion Vision Software and an industrial PC running a powerful hexa-core Intel CPU.
The technology is being used in a wide range of applications - from food manufacturing to automotive assembly processes.
By using matched, precisely manufactured camera hardware, efficient and accurate software and a powerful PC, the system is able to select and locate moving objects in real time, calculating the size of each component to an accuracy of 1 mm in all three dimensions over a picking area of 1200x800x500 mm - or roughly 99.9 per cent accuracy.
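
As a back-of-envelope check on that figure, assuming the percentage relates the 1 mm error to the longest dimension of the picking area:

```python
# Hedged back-of-envelope check of the quoted 99.9 % figure, assuming the
# percentage relates the 1 mm error to the 1200 mm long axis of the area.
error_mm = 1.0
longest_axis_mm = 1200.0
accuracy = 1.0 - error_mm / longest_axis_mm
print(f"{100 * accuracy:.1f} % accuracy")  # prints "99.9 % accuracy"
```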

Conclusion

As we have seen, the increase in measurement accuracy delivered by machine vision systems in industrial applications is little short of astounding.
Camera technology will continue to evolve, but the key underlying principles will remain the same: use the best cameras your budget allows, ideally use identical cameras, and look at the complete range of mechanical specifications rather than just pixel count; use efficient software and run it on a fast PC.

Indeed, by improving the system's reliability so that images are stable and captured with split-second timing precision, you can deliver 3D sub-pixel measurements - allowing you to improve accuracy, or to use cameras with a lower pixel count (and therefore cost).

Contact

Sony Europe B.V., Zweigniederlassung Deutschland

Kemperplatz 1
10785 Berlin
Germany

+49 30 419 551 000
+49 30 419 552 000
