Machine Vision

Best Starting Point for Visual Intelligence

State-of-the-Art in Embedded Vision

30.05.2022 - While the topic of embedded vision is not new, new opportunities are emerging with hardware developments. The following article provides an overview.

Embedded vision products are small, modular, and platform-independent, and they combine good performance with low power consumption. These are exactly the characteristics desired for most vision applications. This raises the question of why products with this combination of features did not exist earlier. The answer is simple: products of this kind have been around for a long time, just not at this level of performance. After all, the goal of embedded vision systems is usually to integrate the complete image processing system, including hardware and software, into the customer's housing. This means that the camera sits relatively close to the computing platform, and the application-specific hardware is embedded in or connected to the system. Additional components such as lighting, sensors or digital I/O are integrated or provided.

Vision-in-a-Box approach suitable for many applications

A suitable example from Matrix Vision is an application for a UK traffic monitoring company for license plate recognition. The application has two remote sensor heads with different image sensors, one for daytime and one for nighttime use. The LED illumination sits on a separate board. Everything is connected to a PowerPC-based board and to an FPGA-equipped board. An image is triggered via the integrated digital I/Os, the FPGA extracts the license plate region from the complete image and sends this AOI over the network to a cloud server, which uses OCR to determine the characters. Anyone tempted to object that PowerPC is old news would be right.

The example mentioned is already more than 13 years old and is based on a smart camera that was customized for the task. And as the example shows, smart cameras, or intelligent cameras in the broader sense, are vision-in-a-box systems, proving that embedded vision has been around for a long time. Their limited performance restricted the range of applications, but where smart cameras could be used, they are still doing their job reliably today.

License plate recognition is also an example of edge computing. Edge computing, as we know, is the approach of decentralizing data processing: the initial preprocessing of the acquired data is performed at the edge of the network, so to speak, and the further aggregation then takes place on the (cloud) server. In the Internet of Things (IoT), embedded vision is the image processing solution for edge computing. It is thus clear that even with Industry 4.0, the requirement profile of embedded vision has not changed much.
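The division of labor described above can be sketched in a few lines of Python. The function names, frame layout and bounding box are purely illustrative and not part of any real product API; the point is simply that only the small AOI crop, not the full frame, travels over the network to the OCR server.

```python
def extract_aoi(frame, bbox):
    """Edge step: crop the license-plate region (AOI) from the full frame.

    frame is a list of pixel rows; bbox is (x, y, width, height).
    """
    x, y, w, h = bbox
    return [row[x:x + w] for row in frame[y:y + h]]

def bytes_saved(frame_shape, bbox):
    """Network traffic the edge preprocessing avoids (8-bit mono pixels)."""
    full_pixels = frame_shape[0] * frame_shape[1]
    aoi_pixels = bbox[2] * bbox[3]
    return full_pixels - aoi_pixels

# Illustrative numbers: a 4112 x 3008 frame vs. a 400 x 120 plate crop.
saved = bytes_saved((3008, 4112), (0, 0, 400, 120))
```

Only `extract_aoi`'s output would be serialized and sent on; the OCR itself stays on the server side, exactly as in the license plate application above.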

Increased performance demands due to higher resolutions and frame rates

What has changed, on the other hand, is the performance demanded by potential applications due to higher resolutions and frame rates, driven in turn by the new image sensor generations. Here machine vision, as a "free rider" of technology, and embedded vision in particular, benefits from the rapid developments in the smartphone market. It is thanks to this market that the importance of ARM processor architectures is steadily increasing and that systems-on-a-chip (SoCs), for example from NVIDIA, Broadcom (as the basis of the Raspberry Pi) or NXP, as well as systems-on-module (SoMs) based on standards such as SMARC or COM Express, now play the main role as powerful computing units with low power consumption. Mounted on carrier boards, these modules become complete, bootable systems and can serve as the mainboard of a vision-in-a-box solution. In cooperation with the customer, carrier boards can also be customized and integrated more deeply into the customer's application, another aspect of how "embedded" can be interpreted.

How does the data get from the image sensor to the computing unit?

A 12.4 MP fourth-generation Pregius image sensor from Sony with a 10-bit ADC, delivering 175 frames per second at 8 bit per pixel, produces a data volume of 2,170 MB/s. Many interfaces can no longer cope with this. Matrix Vision has therefore opted for PCI Express (PCIe). PCIe is very well standardized across platforms and, above all, scalable. Depending on the number of lanes and the design, net bandwidths of 3,200 MB/s are possible, with image data being written directly to memory with virtually no latency. PCIe can thus meet almost any application requirement.
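As a quick sanity check of the 2,170 MB/s figure, the raw data rate of a sensor stream is simply resolution × frame rate × bytes per pixel. A minimal helper (the function name is ours, not taken from any SDK):

```python
def sensor_data_rate_mb_s(megapixels: float, fps: float, bits_per_pixel: int) -> float:
    """Raw, uncompressed data rate of an image sensor stream in MB/s."""
    bytes_per_pixel = bits_per_pixel / 8
    # megapixels * 1e6 pixels per frame, fps frames per second, /1e6 for MB
    return megapixels * 1e6 * fps * bytes_per_pixel / 1e6

# 12.4 MP fourth-generation Pregius sensor, 175 fps, 8 bit per pixel:
print(sensor_data_rate_mb_s(12.4, 175, 8))  # → 2170.0 MB/s
```

At the full 10-bit ADC depth the same stream would need 2,712.5 MB/s, which makes clear why a scalable interface like PCIe is attractive.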

With the launch of the PCIe-based embedded vision product series, Matrix Vision has put together a construction kit centered around the high-end mvBlueNaos embedded camera series. The camera can be flexibly assembled from different components such as lens, filter, lens holder, sensor and housing, according to the application. For connection to a carrier board, different adapters for PCIe and M.2 are available via the all-in-one interface Naos for Embedded (N4e). The cameras have low power consumption and place little load on the CPU/GPU, and they support ARM SoCs as well as SoMs such as SMARC and COM Express. They enable multi-camera systems and support the AI units of the SoCs. In the "mvIMPACT Acquire" image acquisition SDK, PCIe is implemented as a GenICam GenTL producer and consumer, which allows customers to migrate existing solutions with little effort.

MIPI as an attractively priced solution

MIPI is also often mentioned in connection with embedded vision. MIPI is itself a kind of standard for ARM mobile processors, but it requires a hardware driver for a specific SoC and a specific MIPI sensor, and thus a different one for each combination of SoC and image sensor. The image sensors used with MIPI tend to be at the low end and, in terms of frame rates, sit closer to what USB3 delivers. Provided the performance is sufficient, however, MIPI can be an attractively priced solution.

The last piece of the mosaic that is still missing for embedded vision is a standard that corresponds to GigE Vision and USB3 Vision. The European Machine Vision Association (EMVA) has set the course for this with its standardization initiative at the end of 2019. The starting position for embedded vision could not be better.

Author
Ulli Lansche, Technical Editor

Contact

Balluff MV GmbH

Talstr. 16
71750 Oppenweiler
Germany

+49 7191 94 32 0
+49 7191 94 32 288
