How everything began
Historically, CCD (Charge-Coupled Device) sensors have existed much longer than CMOS sensors, that is to say, for more than 40 years. Thanks to constant improvement and optimization over the years, CCD sensors today stand for excellent image quality. In 2009, the American scientists Willard Boyle and George E. Smith were awarded the Nobel Prize in Physics for the invention of the CCD sensor. Originally developed in 1969 for the storage of data, the potential of the Charge-Coupled Device as a light-sensitive apparatus was soon realized. By 1975, the first sensors with a resolution sufficient for television cameras had appeared. It took more than 10 years longer, however, before process technology was mature enough to begin production of CMOS (Complementary Metal Oxide Semiconductor) sensors. In the mid-nineties, the first commercially successful CMOS sensors appeared on the market.
The more sensitive the better
CMOS sensors are based on the same physical principles as CCD sensors. They convert incoming photons into electrons by means of the photoelectric effect. As a result of their sensor structure, the maximum sensitivity of CMOS sensors lies in the red spectral region (650 – 700 nm). CCD sensors, not least because of the numerous innovations during their longer technological history, have their maximum at about 550 nm, exactly where the human eye is most sensitive. For a variety of technical reasons, CMOS sensors in the past were considerably less efficient at converting incoming light into an electrical signal. The photosensitive area within each pixel of a CMOS sensor occupied only a fraction of the total pixel area; the rest was taken up by the readout electronics associated with each photosensitive area. The structure of CCD sensors is different. In CCDs, the electronics that evaluate the charges collected by the sensor surface are located outside the chip, so almost the entire chip surface is available for photosensitive structures.
Over the last few years, design improvements have increased the light-sensitive area of CMOS sensors to near the level of CCD sensors. One example of such an improvement is the micro-lens array that is now applied to the CMOS chip. The lens array collects the light impinging on each pixel area of the CMOS sensor and focuses it onto the available light-sensitive region within the pixel.
The price of individuality
CMOS chips differ in that they carry individual processing electronics on board each pixel. This characteristic means that they can be read out faster and that the image area can be accessed in more flexible ways. However, there are tiny variations within the individual electronic structures used to process each pixel, which means that the signal offset can differ from pixel to pixel within a CMOS sensor, even though the amplification slopes are almost identical. Variations between the offset values of the pixels in a CMOS sensor are typically ten times larger than those of a CCD sensor.
Taken together, this offset variation represents a difficulty with respect to the sensitivity threshold of the sensor. By definition, this threshold is reached when the signal from the sensor is only as high as the noise (i.e., the signal-to-noise ratio, or SNR, equals one). This matters especially when a weak signal only slightly greater than the background noise must be detected; in this situation, a CMOS sensor performs worse than a CCD sensor. The technical term that quantitatively describes this pixel-to-pixel offset variation is Fixed Pattern Noise (FPN), and CMOS sensors exhibit a higher FPN than CCD sensors.
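Because FPN is, as the name says, fixed, it can largely be calibrated away. The following sketch, with made-up numbers and a deliberately noiseless sensor model, illustrates the basic idea of dark-frame subtraction: record the per-pixel offsets with no light, then subtract them from a normal exposure.

```python
import random

# Hypothetical illustration of fixed pattern noise: each CMOS pixel has its
# own readout offset, so even a uniformly lit scene reads back with a fixed
# spatial pattern. All numbers here are invented for the example.
random.seed(0)
NUM_PIXELS = 8
offsets = [random.gauss(0.0, 5.0) for _ in range(NUM_PIXELS)]  # per-pixel FPN

def read_sensor(scene_signal):
    """Return raw pixel values: the true signal plus each pixel's fixed offset."""
    return [scene_signal + o for o in offsets]

# A dark frame (exposure with no light) captures the offsets alone...
dark_frame = read_sensor(0.0)

# ...which can then be subtracted from a normal exposure to remove the FPN.
raw = read_sensor(100.0)
corrected = [r - d for r, d in zip(raw, dark_frame)]
```

In this idealized model the correction is exact; a real sensor also has temporal (random) noise, which dark-frame subtraction cannot remove.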
The saturation capacity (the maximum charge a pixel can collect before it saturates) is related to a second important parameter of an imaging device, the maximum signal-to-noise ratio. This parameter quantifies the ratio of the signal produced by light under optimum conditions to the pure sensor noise without any light exposure. It can be shown that, in principle, the maximum signal-to-noise ratio equals the square root of the saturation capacity. Thus, the CMOS sensor excels with respect to the maximum signal-to-noise ratio, but it needs more light to do so.
As a simplified rule-of-thumb, one can say that CCD sensors are the preferred choice for applications with little light and CMOS sensors are a good alternative when there is a lot of light.