Machine Vision

Take on Color

1-Chip CCD Cameras Compute Colors from Luminance Values

09.12.2009

What a colorful world we live in... Well, not really. CCD and CMOS chips can only capture luminance values, which makes them color-blind. How is it then possible for us to watch television in color? The answer to this apparent contradiction is simple: Television cameras use three chips, each fitted with a filter for one of the colors red, green and blue.

We expect television cameras to offer high image quality, but they also come at a high price. If, however, the camera has to be small and/or inexpensive, using three chips is not an option. How, then, can colors be obtained from a single color-blind chip? The following sections answer this question. To keep things simple, we will concentrate on CCD cameras; CMOS cameras process colors in a very similar manner.

Why Color-blind?
A pixel on a CCD chip is comparable to a bin in which free electrons are collected during the exposure time (fig. 1). According to the photoelectric effect, these free electrons are created by photons striking the pixel. At the end of the exposure time, the electrons drain off via a resistor, creating a voltage. An A/D converter transforms this voltage into a digital gray level. This value is "gray" because the photon's wavelength, and thus the color, is not transferred to the electron. The camera simply evaluates the number of electrons, which is proportional to the number of collected photons and thus to the intensity of the light.
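The proportionality between collected electrons and the resulting gray level can be sketched in a few lines of code. The function name and the parameters below (full-well capacity, bit depth) are illustrative assumptions, not values from any specific sensor:

```python
def electrons_to_gray_level(electrons, full_well=20000, bit_depth=8):
    """Map a collected electron count to a digital gray level.

    A minimal sketch of the A/D conversion step: the gray level is
    proportional to the number of electrons (and thus to the light
    intensity), clipped at the sensor's full-well capacity.
    Full-well capacity and bit depth are assumed example values.
    """
    max_level = 2 ** bit_depth - 1
    fraction = min(electrons, full_well) / full_well
    return round(fraction * max_level)

# Example: a half-full pixel bin yields a mid-gray value
print(electrons_to_gray_level(10000))  # 128
```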

Now Color Gets into the Game
By placing a green filter in front of a pixel (fig. 2), only green light creates electrons. At the end of the exposure time the electrons drain off via the resistor and yield, by means of the A/D converter, a digital signal: a gray level. This value by itself only carries information about the intensity of the light, not its color. To be interpreted correctly, the gray level must be marked as green; the information "this gray level originated from green light" has to be added.

Elegantly Solved
What holds for green is also true for red and blue. At first glance, this additional color information demands that two more bits be stored per pixel. While working for Kodak in 1976, however, Bryce E. Bayer had the idea to equip every second pixel of a CCD chip with a green filter and to distribute blue and red filters evenly among the remaining pixels (fig. 3). Because of this mosaic-like arrangement, a Bayer filter is also referred to as a mosaic filter. Since the mosaic has a regular structure, the additional color information is carried by the pixel's coordinates alone.
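Because the pattern is regular, the filter color of any pixel follows directly from its row and column. The short sketch below assumes one common Bayer layout (RGGB, i.e. a red pixel in the top-left corner); actual cameras may start the pattern with a different color:

```python
def bayer_color(row, col):
    """Return the filter color of a pixel in an assumed RGGB Bayer mosaic.

    Assumed layout:
        R G R G ...
        G B G B ...
    The color is encoded purely by the coordinates -- no extra bits
    per pixel are needed.
    """
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

# Print the top-left 4x4 corner of the mosaic
for r in range(4):
    print(" ".join(bayer_color(r, c) for c in range(4)))
```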

Computing the Color
Each pixel is now assigned one color value. Ideally, however, each pixel should provide all three colors, which would only be possible with a real three-chip camera. That is why the missing color information is copied from the neighboring pixels: a red pixel lacks the blue and green values, which are found in its direct neighborhood (fig. 3). The main advantage of this method is speed, and the quality is sufficient for moving scenes. For static scenes, however, the result is too grainy. Better results are achieved by averaging the neighboring values. This method requires more computing power, and the averaging leads to smeared edges. Therefore, algorithms have been developed that do not simply average neighboring pixels regardless of the consequences; they detect the presence of an edge and behave more delicately there, which again raises the computing effort.
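The averaging method can be illustrated with a minimal sketch, assuming the RGGB layout from the previous example and using NumPy; `demosaic_bilinear` is a hypothetical helper name, not a function from any camera SDK, and border handling as well as edge-aware refinements are deliberately omitted:

```python
import numpy as np

def demosaic_bilinear(raw):
    """Interpolate an RGGB Bayer raw image into a full RGB image.

    Each missing color value is taken as the mean of the neighboring
    pixels that carry that color.
    """
    h, w = raw.shape
    rgb = np.zeros((h, w, 3), dtype=float)
    rows, cols = np.indices((h, w))
    # Which pixels carry which color in the assumed RGGB layout
    masks = {
        0: (rows % 2 == 0) & (cols % 2 == 0),   # red
        1: (rows % 2) != (cols % 2),            # green
        2: (rows % 2 == 1) & (cols % 2 == 1),   # blue
    }
    # Copy the measured values into their channels
    for channel, mask in masks.items():
        rgb[..., channel][mask] = raw[mask]
    # Fill each channel's gaps with the mean of the available neighbors
    for channel, mask in masks.items():
        chan = rgb[..., channel]
        for r in range(h):
            for c in range(w):
                if mask[r, c]:
                    continue
                neighbors = [chan[rr, cc]
                             for rr in range(max(r - 1, 0), min(r + 2, h))
                             for cc in range(max(c - 1, 0), min(c + 2, w))
                             if mask[rr, cc]]
                chan[r, c] = sum(neighbors) / len(neighbors)
    return np.clip(rgb, 0, 255).astype(np.uint8)

# Example: a small random raw frame becomes a 3-channel image
raw = np.random.randint(0, 256, (4, 6), dtype=np.uint8)
print(demosaic_bilinear(raw).shape)  # (4, 6, 3)
```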

Only for Looks
By interpolating the colors, users do not gain any information: existing information is simply presented in a more familiar form. In return, the transfer rate increases considerably and, consequently, the camera's frame rate drops, while the disk space required to store the additional data grows, all for empty information. From a measurement point of view, the sensor (camera) even manipulates the captured signal (image) to make it look "nicer".
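The overhead is easy to quantify with a back-of-the-envelope calculation; the resolution, bit depth and frame rate below are arbitrary example figures:

```python
# Rough data-rate comparison: raw Bayer output vs. interpolated RGB.
# Resolution, bit depth and frame rate are arbitrary example figures.
width, height = 1280, 960          # pixels
bytes_per_pixel_raw = 1            # 8-bit raw Bayer data
bytes_per_pixel_rgb = 3            # 8 bits each for R, G and B
fps = 30                           # frames per second

raw_rate = width * height * bytes_per_pixel_raw * fps / 1e6
rgb_rate = width * height * bytes_per_pixel_rgb * fps / 1e6
print(f"raw Bayer: {raw_rate:.1f} MB/s, interpolated RGB: {rgb_rate:.1f} MB/s")
# -> raw Bayer: 36.9 MB/s, interpolated RGB: 110.6 MB/s
```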
Figure 4 shows a way out of this problem: The camera outputs the raw color data. As a result, both the transmission and the archiving of the images become more efficient. Furthermore, those who work in a measurement context are able to work with the original data. The color interpolation is only activated when the images need to be visualized.
In some cases, it may nevertheless be beneficial to perform the color interpolation directly within the camera. Consequently, modern industrial cameras can be switched via software between raw data output and color interpolation.

More Information
If you would like to learn more, you can download a number of white papers from The Imaging Source's web site at www.theimagingsource.com. Additionally, you can try out the free "Bayer Demonstrator" tool for Windows, which, based on a number of simple images, illustrates how color interpolation works.

Contact

The Imaging Source Europe GmbH

Sommerstrasse 36
28215 Bremen
Germany

+49 421 335 91 0
+49 421 335 91 80
