CMOS and CCD – Small Differences Along the Way from Light to a Signal

How everything began
Historically, CCD (Charge-Coupled Device) sensors have existed much longer than CMOS sensors, namely for more than 40 years. Thanks to constant improvement and optimization over the years, CCD sensors today stand for excellent image quality. In 2009, the physicists Willard Boyle and George E. Smith were awarded the Nobel Prize in Physics for the invention of the CCD sensor. Originally developed in 1969 for data storage, the Charge-Coupled Device's potential as a light-sensitive device was soon realized. By 1975, the first sensors with a resolution sufficient for television cameras had appeared. It took more than a further decade, however, before process technology was mature enough to begin production of CMOS (Complementary Metal Oxide Semiconductor) sensors. The first commercially successful CMOS sensors appeared on the market in the mid-1990s.

The more sensitive the better
CMOS sensors are based on the same physical principle as CCD sensors: both convert incoming photons into electrons via the photoelectric effect. As a result of their sensor structure, the maximum sensitivity of CMOS sensors lies in the red spectral region (650 – 700 nm). CCD sensors, not least because of the numerous innovations during their longer technological history, have their maximum at about 550 nm, exactly where the human eye is most sensitive. For a variety of technical reasons, CMOS sensors in the past were considerably less efficient at converting incoming light into an electrical signal. The photosensitive area within each pixel of a CMOS sensor occupied only a fraction of the total pixel area; the rest was taken up by the readout electronics associated with each photosensitive element. CCD sensors are structured differently: the electronics that evaluate the charges collected on the sensor surface are located outside the chip, so almost the entire chip surface is available for photosensitive structures.

Over the last few years, design improvements have increased the light-sensitive area of CMOS sensors to near the level of CCD sensors. One example of such an improvement is the micro-lens array that is now applied to the CMOS chip. The lens array collects the light impinging on each pixel area of the CMOS sensor and focuses it onto the available light-sensitive region within the pixel.

The price of individuality
One set of electronics for all pixels – applied to the processing chain, this description is valid for CCD sensors and, at first sight, sounds like a trade-off. In fact, it is an advantage for image quality: because a large fraction of the pixels in a CCD chip, if not all of them, share one common electronic path, all analog pixel signals are evaluated and processed in the same way, and they are all converted to digital signals in the same way.

CMOS chips differ in this respect: they carry individual processing electronics on board for each pixel. This allows them to be read out faster and gives more flexible access to the image area. However, tiny variations in the electronic structures that process each pixel mean that the signal offset can differ from pixel to pixel within a CMOS sensor, even though the amplification slopes are almost identical. The offset variations between the pixels of a CMOS sensor are typically ten times larger than those of a CCD sensor.

Taken together, this offset variation represents a difficulty with respect to the sensitivity threshold of the sensor. By definition, this threshold is reached when the signal from the sensor is just as high as the noise (i.e., the signal-to-noise ratio, SNR, equals one). When a weak signal only slightly above the background noise must be detected, a CMOS sensor therefore looks worse than a CCD sensor. The technical term that quantitatively describes this pixel-to-pixel variation is Fixed Pattern Noise (FPN), and CMOS sensors exhibit a higher FPN than CCD sensors.
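The effect of this offset variation on a weak signal can be sketched numerically. The offset spreads below are illustrative assumptions (not datasheet values), chosen only to reflect the roughly tenfold difference mentioned above:

```python
import random
import statistics

random.seed(0)

# Illustrative (assumed) pixel-to-pixel offset spreads, in electrons:
# CMOS offset variation is taken as roughly ten times that of a CCD.
CCD_OFFSET_SIGMA = 1.0
CMOS_OFFSET_SIGMA = 10.0
PIXELS = 10_000

signal = 5.0  # a weak, uniform signal of 5 electrons per pixel

ccd_frame = [signal + random.gauss(0.0, CCD_OFFSET_SIGMA) for _ in range(PIXELS)]
cmos_frame = [signal + random.gauss(0.0, CMOS_OFFSET_SIGMA) for _ in range(PIXELS)]

# The fixed pattern "noise" is the spread of the frame across pixels:
# the CCD frame scatters by about 1 electron, the CMOS frame by about 10,
# so the 5-electron signal stands out on the CCD but is buried on the CMOS.
print(statistics.stdev(ccd_frame))
print(statistics.stdev(cmos_frame))
```

The same mechanism is why FPN can be partially corrected in practice (e.g., by subtracting a calibration dark frame), since the offsets are fixed per pixel rather than random in time.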

Less sensitivity, but lots of space for electrons
CMOS sensors, however, score much better in another area: they can provide a higher full well capacity. The full well capacity is the maximum number of electrons an individual pixel can hold. In CCD sensors, this number is often artificially limited to a reduced saturation capacity to avoid certain technical problems, such as blooming, where excess charge spills into neighboring pixels. The ratio of the saturation capacity (or full well capacity) to the sensitivity threshold determines the sensor's dynamic range. What a CMOS sensor loses to a CCD sensor in low-light sensitivity, it wins back in saturation capacity. As a result, CMOS and CCD sensors have almost the same dynamic range. Certain procedures can be used to modify the characteristic response curve of a sensor so that it matches what a human eye perceives. For CMOS sensors in particular, a logarithmic response with a dynamic range of more than 100 dB can be achieved in this way. Such a response curve differentiates dark regions of an image more finely than bright regions and should be applied with care.
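The "wins back what it loses" argument can be made concrete with the usual definition of dynamic range in decibels, 20·log10(saturation capacity / sensitivity threshold). The electron counts below are assumed round numbers for illustration, not measured values for any particular sensor:

```python
import math

def dynamic_range_db(full_well: float, threshold: float) -> float:
    """Dynamic range = saturation capacity / sensitivity threshold, in dB."""
    return 20 * math.log10(full_well / threshold)

# Illustrative (assumed) values, in electrons:
# CCD:  lower noise floor, smaller (often deliberately limited) full well.
# CMOS: higher noise floor due to FPN, but larger full well capacity.
ccd_dr = dynamic_range_db(full_well=20_000, threshold=10)
cmos_dr = dynamic_range_db(full_well=60_000, threshold=30)

# Both ratios equal 2000, i.e. 20 * log10(2000) ≈ 66 dB:
# the CMOS gain in full well roughly cancels its loss at the threshold.
print(f"CCD:  {ccd_dr:.1f} dB")
print(f"CMOS: {cmos_dr:.1f} dB")
```

With these numbers both sensors land at the same dynamic range, which is exactly the point of the paragraph above; real sensors will of course differ in detail.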

The saturation capacity is related to a second important parameter of an imaging device: the maximum signal-to-noise ratio. This parameter quantifies the ratio of the signal obtained under optimum lighting conditions to the pure sensor noise without any light exposure. It can be shown that, in principle, the maximum signal-to-noise ratio equals the square root of the saturation capacity. The CMOS sensor thus excels with respect to the maximum signal-to-noise ratio, but it needs more light to do so.
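The square-root relationship follows from photon shot noise, which obeys Poisson statistics: for N collected electrons the noise is √N, so the best achievable SNR is N/√N = √N. A minimal sketch, using the same assumed full well capacities as above:

```python
import math

def max_snr(full_well_electrons: float) -> float:
    # Shot noise is Poisson-distributed: noise = sqrt(N) for N electrons,
    # so the best possible SNR at saturation is N / sqrt(N) = sqrt(N).
    return math.sqrt(full_well_electrons)

# Illustrative (assumed) full well capacities, in electrons
print(round(max_snr(20_000)))  # CCD-like sensor:  141
print(round(max_snr(60_000)))  # CMOS-like sensor: 245
```

The larger full well of the CMOS-like sensor buys a higher maximum SNR, but only if enough light is available to actually fill the pixel, which matches the rule of thumb below.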

As a simplified rule of thumb, one can say that CCD sensors are the preferred choice for low-light applications, while CMOS sensors are a good alternative when plenty of light is available.
