Uncovering WDR: Before and after


As network cameras journey into the era of megapixel resolution, their internal sensors are also changing to meet the requirements of high definition. With their affordable price tag, CMOS sensors are replacing the traditional CCD sensors and slowly becoming the “eyes” of a growing number of megapixel network cameras. The differences in technology between the two types of sensors, however, often create problems that affect low-light and wide dynamic range (WDR) performance. Fortunately, with the advancement of different combinations of CMOS sensors and digital signal processors (DSPs), WDR technology is showing great improvement.

At present, WDR surveillance cameras using the CMOS + DSP combination are gradually dominating the market. From a sales perspective, surveillance cameras with WDR features are slightly more costly. Technology-wise, WDR is already in its third generation, also known as “True WDR,” counting from when Panasonic first developed the technology. “Currently in the market, any camera able to produce a dynamic range over 100 dB is considered ‘True WDR,' and is capable of handling any WDR application,” stated Peter Pan, Product Manager at Dahua Technology.

WDR applications have grown wider as well. The swift development of WDR has made it an undeniably key technology in surveillance cameras. Under these circumstances, WDR technology ought to be fully developed and stable by now. However, the WDR feature in many cameras is still achieved mainly by merging two images together, and if the DSP chip used does not have sufficient processing capability, the result may be poor image recovery, poor color reproduction, blurred images, and poor definition. Thus, knowing the capabilities and applications of WDR surveillance cameras is extremely important.

What is WDR?
Many users may not understand exactly what WDR is or what it does, and have misconceptions about what it is supposed to do. Most manufacturers also have their own definitions and terms for the different forms of WDR technology available today, and one manufacturer's definition of a type of WDR will often conflict with what another manufacturer means by the same term (e.g., digital WDR, True WDR). Simply put, WDR is the ability to simultaneously and clearly display the details of both the brightest and darkest areas in a scene; in other words, WDR technology can display detail in a scene containing elements of high contrast. When a strong light source (sunshine, lights, reflections) and shadows, backlight, or other low-light areas exist simultaneously in a scene, the brighter areas may suffer from overexposure and show up as a blob of white in the image; conversely, the darker areas will show up as a pool of black due to underexposure. Both situations degrade image quality. There are, however, limits to what a camera can display under extreme brightness and darkness, and this limit is referred to as the dynamic range.

“One way to think about WDR is as an art of trade-offs. In all imaging, the main task is to reduce noise and emphasize signal. The most obvious tradeoff in WDR is between noise and artifacts. It is possible to reduce noise levels and thus allow a higher dynamic range, but it comes with the cost of new artifacts. All current WDR techniques create artifacts,” said Andres Vigren, Product Manager at Axis Communications.

It is important to highlight that dynamic range is defined as the ratio between the brightest and darkest regions in an image, not as an absolute value, and this ratio is measured in decibels (dB). Regular video surveillance cameras have a dynamic range of about 10 dB, while regular WDR cameras reach about 48 dB, a difference of 38 dB. Third-generation WDR cameras can reach up to 95 dB. With current technology, the maximum dynamic range can surpass 120 dB but not 130 dB.
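The relationship between a dB figure and the underlying contrast ratio can be sketched in a few lines. This is a minimal illustration only: it assumes the standard 20·log10 convention for sensor dynamic range, and the function names are hypothetical.

```python
import math

def dynamic_range_db(brightest, darkest):
    """Dynamic range in decibels for a brightest/darkest luminance ratio."""
    return 20 * math.log10(brightest / darkest)

def contrast_ratio(db):
    """Invert the conversion: the luminance ratio implied by a dB figure."""
    return 10 ** (db / 20)

# Under this convention, a "True WDR" camera rated at 120 dB
# spans a 1,000,000:1 contrast ratio.
print(round(contrast_ratio(120)))        # 1000000
print(dynamic_range_db(1_000_000, 1))    # 120.0
```

Viewed this way, the jump from 48 dB to 95 dB is not roughly double the capability but several orders of magnitude more contrast handled.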

Main Sensor Combinations for WDR
WDR technology employs various combinations of image sensors and processors, which can largely be categorized into three main combinations. “The sensor is a very important component contributing to WDR performance, as the dynamic range of a camera depends on that of its sensor. The pairing of chipset and sensor needs to be considered as well; for example, if an older-generation ISP chipset is matched with a new WDR sensor, the WDR functionality may not perform to its maximum ability,” said Xuehai Yu, IP Camera Product Manager at Hikvision Digital Technology.

Through these three combinations, the technological evolution and application development of WDR can be traced. Anyone in the security industry knows that before WDR technology was applied in surveillance cameras, only features such as low lux, filters, polarizers, autofocus, and BLC were available to tackle environments with changing light. Unfortunately, these earlier technologies have limited capabilities. WDR technology compensated for what they were unable to achieve, and went above and beyond what they were able to do. The three main combinations are dissected in the following.

CCD Sensor + DSP
The first combination pairs a CCD sensor with a DSP. This form of WDR technology is also known as digital WDR. This combination uses a multi-exposure method, consisting of short and long exposure speeds. The first, short exposure captures the bright areas of the scene, producing an image in which details of the bright areas can be clearly seen; this image is saved into a buffer in RAM. The second, slower exposure targets the dark areas of the scene and provides an image with details of the dark areas, which is saved into the same buffer. After both exposures have been performed, the DSP stacks the two images together, resulting in an image in which details are visible in both the bright and dark regions of the scene.
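The stacking step described above can be sketched as follows. This is a deliberately naive fusion, assuming 8-bit grayscale frames; the `threshold` value and function name are hypothetical, and a real DSP would blend the two frames far more smoothly to avoid visible seams.

```python
import numpy as np

def merge_exposures(short_exp, long_exp, threshold=200):
    """Naive two-exposure fusion: take bright-area detail from the short
    exposure and dark-area detail from the long exposure.

    short_exp, long_exp: uint8 grayscale frames of the same scene.
    threshold: luminance above which the long exposure is assumed blown out
    (an arbitrary illustrative value).
    """
    short_f = short_exp.astype(np.float32)
    long_f = long_exp.astype(np.float32)
    # Where the long exposure saturates, trust the short one instead.
    blown = long_f >= threshold
    merged = np.where(blown, short_f, long_f)
    return merged.astype(np.uint8)
```

The hard-switch `np.where` is exactly what causes the artifacts mentioned earlier: along the boundary between "trusted" regions, brightness can jump abruptly, which is why processing capability in the DSP matters so much.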

CMOS Sensor + DPS
The second combination comes from Pixim's technology development, based on a new CMOS image-capturing system from the 1990s known as the digital pixel system (DPS). For WDR, DPS uses a process in which pixels are exposed individually, along with control technology from an ARM7 CPU. Compared with the multi-exposure method used by a DSP, this combination provides a wider dynamic range: surveillance cameras using the CMOS sensor + DPS combination can reach a dynamic range of 95 dB, and some can even reach above 120 dB. By using DPS, problems that occurred with CCD sensors (image discoloration or limited processing range) have also been resolved, and its ability to accurately reproduce true-to-life color helps meet application requirements.

The CMOS + DPS combination assumes the roles of the eye and brain and mimics how the two function together, allowing the image processor and the image sensor to have a two-way, real-time interaction. While the DPS is processing an image, it simultaneously transmits signals to the image sensor, not only adjusting the exposure time but also changing the image-capturing algorithms to achieve intelligent image capture. Therefore, under specific lighting conditions and environments, surveillance cameras using DPS can ultimately provide a more detailed, more complete, and more realistic image.

However, because DPS exposes pixels individually, each pixel must contain a complete set of processing circuitry, which effectively decreases the photosensitive area of each pixel. With less light received per pixel, cameras using the CMOS + DPS combination show a noticeable drop in sensitivity.

Sony Effio-P WDR
Compared to the two combinations listed previously, the Sony Effio series surveillance cameras deliver even more practical results. The Effio-P is a WDR solution introduced by Sony in the last two years. The Effio-P can pair with the latest sensors and supports CCD dual scanning to fully achieve true WDR, allowing it to clearly depict scenes with backlight or extreme lighting.

Differences in Performance
Though WDR cameras mostly utilize the same types of chip-sensor combinations, camera performance still varies greatly from one model to another. “Just having a better sensor (front end) does not mean you can achieve the best WDR performance, because the signal processing system (back end) matters a lot in obtaining the best WDR result. For example, Sony sells Exmor CMOS sensors to many camera manufacturers, but no one achieves the same performance as Sony's technology,” said Miyamaki Hideo, Head of the System Engineering Department in the Visual Security Solutions Business Development Office at APAC, Sony Electronics.

Another major factor is the firmware used in conjunction with the CMOS. “Even if the sensors for these WDR cameras are the same, if the ISP or DSP paired with the sensor is different, there will be differences in WDR performance. The firmware can enhance the image, mainly by changing the algorithms of the ISP, but the most obvious differences lie in the underlying hardware design. As WDR cameras generally process larger image files than regular cameras, they naturally have higher requirements for the DSP. Some manufacturers incorporate another chip to help process the image. Using 3-megapixel WDR cameras from Dahua as an example, we added an FPGA chip to help enhance the image. Therefore, to ensure the best WDR feature, you first have to have a good design, as the firmware will adjust accordingly based on the design,” said Pan.

WDR performance can also vary based on the default settings of the camera. “Each manufacturer has its own definition of how images should be presented, and a different preference for WDR performance in real application scenarios. Therefore, even if they use the same combination of sensor and ISP, WDR performance may differ significantly based on the manufacturer's default settings,” said Yu.

According to Arnaud Lannes, Product Marketing Manager at Bosch Security Systems, manufacturers often communicate WDR using theoretical values based on the bit depth of the processor, but as the technology developed, Bosch created its own measurement method that can accurately measure the actual value.

Some users may be concerned whether WDR performance will provide the same results depending on whether the camera is IP-based, analog, or HD-SDI. According to Hideo, “the difference is whether they make use of CMOS or CCD. Recently, cameras based on CMOS have the widest dynamic range. Technically, the type of interface does not matter for WDR performance.” Agreeing with Hideo, Vigren pointed out that it does not matter whether users select SDI, network, or analog cameras. “They all face the same challenges. When selecting a camera, the most important thing is to try both non-WDR and WDR solutions to understand the real need. Sometimes the WDR camera is not the right solution.”

“The WDR of the camera is linked to the sensor capability and the quality of the image processing. That is why WDR should not depend on whether the camera is SDI, IP, or analog. However, the breakthrough of IP technology offers much more possibility to enhance image performance by using analytics algorithms to dynamically tune the image processing,” Lannes stated.

WDR Video on TV
In order to properly display WDR video, some considerations must be taken into account. Regular TV monitors are unable to display WDR imagery, as they are limited to a dynamic range 200 to 300 times narrower than that of a WDR camera. To address this, the WDR image is put through a nonlinear image-processing technique known as tone mapping, which reassigns pixel brightness values to reduce global contrast while preserving local contrast. “Currently, most local manufacturers use local tone mapping technology, which helps to optimize each image pixel according to local image character, and also to adjust image clarity in both the dark and overexposed areas,” stated Yu. In this way, the overall appearance of the scene remains perfectly acceptable to the human eye, while the image becomes displayable on a monitor.
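The idea of scaling each pixel by its local neighbourhood can be sketched as follows. This is a crude Retinex-style illustration, not any manufacturer's actual algorithm; the box-blur window size, scaling constant, and function names are arbitrary assumptions.

```python
import numpy as np

def box_blur(img, k):
    """Mean filter with edge padding: a crude stand-in for the
    neighbourhood average a real ISP would compute."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def local_tone_map(hdr, k=15, eps=1e-6):
    """Scale each pixel by its local average: brightness differences
    between distant regions (global contrast) collapse toward a common
    level, while detail relative to the neighbourhood (local contrast)
    survives in the 8-bit output."""
    local_avg = box_blur(hdr.astype(np.float64), k)
    ratio = hdr / (local_avg + eps)
    return np.clip(ratio * 128, 0, 255).astype(np.uint8)
```

With this scheme, a uniformly bright region and a uniformly dark region both land near mid-gray on the display, while edges and texture inside each region remain visible, which is exactly the global-versus-local trade the text describes.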

Quality via Brand
Although WDR technology is developing and improving rapidly, it is still unable to match human vision, as the low-lux threshold for WDR cameras remains relatively high: the darker it is, the worse WDR technology performs. Most WDR cameras still need to incorporate BLC to raise their ability to capture objects in environments with challenging lighting. Other limitations include limited available shutter ratios due to certain technological constraints, as well as motion artifacts. “If the image is built up of consecutive frames, there will always be visible defects in case of object motion for security cameras,” said Vigren. Hence, only cameras from some of the leading brands tend to win users over.

It is important to note that people often confuse a camera's backlight compensation (BLC) feature with WDR, but the two can be differentiated quite easily. In the field of view of a conventional camera capturing a target object, such as something at a door or outside a window, there may be a very strong light source in the background. Regular cameras are extremely limited in balancing the brightest and darkest regions of an image; they usually use the average of all incoming light as the reference for determining the level of exposure. BLC is the feature designed to address this. It uses center-weighted metering, mainly enhancing the brightness at the central point of view while suppressing the brightness around the object in order to see the target clearly. Under these circumstances, however, the camera is unable to simultaneously display clearly what is in the background and what is in the foreground.
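The center-weighted metering that distinguishes BLC from WDR can be sketched like this. It is a simplified illustration only; the window size, weights, and function name are hypothetical, not values from any real camera.

```python
import numpy as np

def center_weighted_exposure(frame, center_frac=0.5, center_weight=0.8):
    """Sketch of the center-weighted metering used by classic BLC: the
    luminance of a central window dominates the exposure decision, so a
    backlit subject in the middle is exposed correctly even though the
    bright background is pushed toward overexposure.

    frame: 2-D uint8 luminance image.
    Returns the metered luminance an auto-exposure loop would compare
    against its target level.
    """
    h, w = frame.shape
    ch, cw = int(h * center_frac), int(w * center_frac)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    center = frame[y0:y0 + ch, x0:x0 + cw].astype(np.float64)
    whole = frame.astype(np.float64)
    return center_weight * center.mean() + (1 - center_weight) * whole.mean()
```

For a dark subject surrounded by a bright backlight, the metered value stays close to the subject's luminance rather than the frame average, which is why BLC exposes the target correctly but lets the background blow out, whereas WDR tries to preserve both.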
