How Image Enhancement Works in Real Life
a&s International
While TV episodes and Hollywood movies are fiction, what if there really were technology that could clean up bad images? The primary objective is identification, whether of people or license plates, in order to track down suspects. On screen, no matter how poor the image, enhancement magically produces HD stills ready for identification.
However, security cameras are not always set up for identification purposes. Most people counting applications mount cameras overhead, making it impossible to capture a face. In traffic monitoring, roadside cameras watch for flow; cars are only identified by high-resolution LPR cameras when stalled at toll stations or exit ramps. Camera placement and objectives will affect whether an image will be usable or not. Even if a face is recorded, it may not lead to an arrest, as in the case of the 2010 Dubai assassination.
It may not be possible to make a thumbnail clear as day, but there are real ways to improve images. “We are able to zoom, and we're able to enhance,” said Joelle Katz, Marketing Manager at Brivo Systems, in a prepared statement. “But don't count on CSI's pseudo-scientific enhancement to be available any time soon.”
Traditional government users in federal, military and intelligence agencies benefit most from enhancing security video, but the applications are limitless. “We have folks in academia that use our software for projects they're working on,” said Benjamin Solhjem, PM of Motion DSP. “We have retail customers, such as Target and Wal-Mart, who use image enhancement for loss prevention. There's a lot of demand for video enhancement in any application that has use for a camera.”

While image sharpening technology exists, awareness and demand are limited. “We have never had a request for this, though we have had requests for some other things that people see on TV shows,” said Bob Mesnik, President of Kintronics, a US distributor.
How Things Work
Image enhancement for still images is all about amplifying the image signal. “Enhancement of a still picture can be accomplished using compressed sensing,” Katz said. “It's a mathematical tool capable of creating high-resolution photos from low-resolution shots. At the very basic level, it works by repeatedly layering colored shapes into the areas where there are missing pixels to achieve what's called sparsity, a measure of image simplicity.”
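The sparsity idea Katz describes can be sketched in code. The following is a minimal, illustrative NumPy demonstration of the compressed-sensing principle, not any vendor's product: a signal that is sparse (mostly zeros) can be recovered from fewer measurements than unknowns. Orthogonal matching pursuit, one of several standard recovery algorithms, is used here; the dimensions and random measurement matrix are assumptions chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# A sparse "image" signal: 64 values, only 3 of them nonzero.
n, m, k = 64, 32, 3
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)

# Take only 32 random linear measurements of the 64 unknowns.
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x

# Orthogonal matching pursuit: greedily pick the columns of A that
# best explain the residual, then least-squares fit on that support.
support, r = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
print("reconstruction error:", np.linalg.norm(x_hat - x))
```

Recovery succeeds here because the signal really is sparse and there are enough measurements; with a dense signal or too few measurements, the reconstruction degrades, which is one reason the technique remains a research topic rather than a point-and-click feature.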
While compressed sensing is still being researched for radar and medical imaging, noisy and grainy video can be cleaned up with commercially available tools. Adobe Photoshop and Topaz Enhance tools reduce noise in a number of ways: spatial noise reduction within each frame, temporal noise reduction between frames, and spatial-temporal noise reduction that combines both methods, Katz said.
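The spatial and temporal approaches above can be illustrated with a toy example. This NumPy sketch assumes a static, smoothly varying scene and a simple 3x3 box filter; real products use far more sophisticated filters, but the principle is the same: spatial filtering averages neighboring pixels within one frame, while temporal filtering averages the same pixel across frames.

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy static scene (a smooth gradient) filmed for 8 noisy frames.
yy, xx = np.mgrid[:32, :32]
scene = (yy + xx) * 4.0
frames = scene + rng.normal(0, 20, size=(8, 32, 32))

def spatial_denoise(frame):
    """Spatial noise reduction: 3x3 box average within a single frame."""
    p = np.pad(frame, 1, mode="edge")
    out = np.zeros_like(frame)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += p[1 + dy:1 + dy + 32, 1 + dx:1 + dx + 32]
    return out / 9.0

# Temporal noise reduction: average each pixel across frames.
# This works well only while nothing in the scene moves.
temporal = frames.mean(axis=0)

err_before = np.abs(frames[0] - scene).mean()
err_spatial = np.abs(spatial_denoise(frames[0]) - scene).mean()
err_temporal = np.abs(temporal - scene).mean()
print(err_before, err_spatial, err_temporal)
```

Both filtered errors come out well below the raw error. The trade-off the combined spatial-temporal methods try to manage is visible here: spatial averaging blurs fine detail, while temporal averaging smears anything that moves between frames.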
Motion DSP employs spatial-temporal noise reduction algorithms, but Solhjem cautioned that image enhancement is a tool rather than a magic bullet. “‘CSI' will show a totally crappy image the size of my finger, then blow it up to be better than 1080p. That's a misnomer,” Solhjem said. “But you can, utilizing certain algorithms, try to eliminate the bad data there and increase the level of information. It does not increase the resolution per se, but makes it easier to see what the image looked like when it was imaged by the camera.”
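Solhjem's point that enhancement cannot invent resolution can be shown directly. In this illustrative NumPy sketch (the one-pixel checkerboard scene is an assumption chosen for demonstration), fine detail captured at quarter resolution cannot be restored by upscaling, because that information never reached the sensor.

```python
import numpy as np

# A fine-detail scene: a checkerboard of 1-pixel squares.
detail = np.indices((32, 32)).sum(axis=0) % 2 * 255.0

# The camera only captured a quarter-resolution version:
# each 2x2 block of pixels is averaged into one sensor pixel.
low = detail.reshape(16, 2, 16, 2).mean(axis=(1, 3))

# "Enhancing" by upscaling (nearest-neighbour repeat) restores the
# pixel count, but the checkerboard is gone -- every block averaged
# to the same flat grey before it ever left the camera.
up = np.repeat(np.repeat(low, 2, axis=0), 2, axis=1)
print(up.shape == detail.shape, np.allclose(up, detail))
```

The upscaled image has the same dimensions as the original scene but none of its pattern, which is exactly the gap between cleaning up existing data and the fictional CSI-style zoom.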
Image enhancement also has to deal with compression, which reduces the number of usable pixels for analysis. “Bear in mind that these programs work best with the highest resolution pictures they can get,” said Dave Gorshkov, CEO of Digital Grape and Chair of the CCTV and VCA Technical Standards Working Group for the American Public Transportation Association. “What you find with the current generation of network cameras is that the analytics are done on the native image in the camera, using a dedicated DSP. It is not done at the control center, because the image needs to be compressed and then sent to the control center over a low-speed backhaul network. This compromises the type and complexity of VCA that can be done in realistic time frames at the camera, as more complex, server-based analysis done with powerful computers can't be put in a network camera because of program size, processor power requirements and the associated ‘on-cost' of such a camera.”