Tipping the Megapixel Equation

Date: 2011/08/23
Source: Submitted by Catalyst Communications

There has been significant debate in recent years about whether the surveillance market has finally reached the tipping point into IP technology. In reality, it is only within the last 12 months that the emergence of megapixel camera technology has made this a realistic prospect. This feature examines the implications of the technology in terms of storage requirements and outlines an overall solution design for the (near) future.

A GROWING ELEPHANT IN THE ROOM
As Howie sees it, the current market will have to become more standardized before complete adoption can be reached. “I'm sure it will over time. However, the current status is still some way away.” ONVIF, for example, is still some distance away from delivering generic integration between compliant products. “It has certainly not reached the concept of plug-and-play, as many would assume when they see ONVIF compliance,” Howie cautioned.

Network cameras are effectively small network devices, similar in many ways to PCs, embedded with complex software and firmware. Recording and display systems must communicate with each of these devices in a common language. For most manufacturers, meeting the ONVIF standard often amounts to no more than basic communication to receive video over the network. In practice, if the recording and management system does not have full control of the camera and the features you want to use, the camera is not fully integrated, regardless of which standard it claims to meet. Getting the integration you really need means verifying full compatibility on an individual basis and effectively ignoring statements of compliance.
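To make this concrete, device discovery is one of the few pieces of ONVIF behaviour that tends to work across vendors: compliant cameras answer a WS-Discovery probe sent to the standard multicast address. The Python sketch below is an illustration added here, not part of the original article; it sends such a probe and lists whatever responds. Everything beyond discovery, such as full PTZ, event or configuration control, is where the vendor-specific integration work described above begins.

# Minimal ONVIF device discovery via WS-Discovery (illustrative sketch only).
# Sends a multicast Probe to 239.255.255.250:3702 and prints each camera that replies.
import socket
import uuid

PROBE = f"""<?xml version="1.0" encoding="UTF-8"?>
<e:Envelope xmlns:e="http://www.w3.org/2003/05/soap-envelope"
            xmlns:w="http://schemas.xmlsoap.org/ws/2004/08/addressing"
            xmlns:d="http://schemas.xmlsoap.org/ws/2005/04/discovery"
            xmlns:dn="http://www.onvif.org/ver10/network/wsdl">
  <e:Header>
    <w:MessageID>uuid:{uuid.uuid4()}</w:MessageID>
    <w:To>urn:schemas-xmlsoap-org:ws:2005:04:discovery</w:To>
    <w:Action>http://schemas.xmlsoap.org/ws/2005/04/discovery/Probe</w:Action>
  </e:Header>
  <e:Body>
    <d:Probe><d:Types>dn:NetworkVideoTransmitter</d:Types></d:Probe>
  </e:Body>
</e:Envelope>"""

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3)
sock.sendto(PROBE.encode("utf-8"), ("239.255.255.250", 3702))

try:
    while True:
        _, addr = sock.recvfrom(65535)       # each ProbeMatch arrives as one UDP datagram
        print(f"ONVIF responder at {addr[0]}")
except socket.timeout:
    pass                                     # no further replies within the timeout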

The complexity of integrating IP systems looks set to become an elephant in the room when it comes to ongoing support and future upgrades. Consider an engineer deployed to a site who determines that a megapixel camera must be replaced. The engineer now has to make sure that the replacement is compatible with the recording and management system already in place. Potentially, that means replacing the camera and upgrading all recording and display systems just to get back to the status quo of viewing and recording the scene covered by the failed camera, while ensuring that all other elements of the system remain unaffected by the accompanying software changes and upgrades.

While the adoption of megapixel technology is increasing, you could argue that its integration status within projects is still in a phase of beta “field testing” to prove its reliability. This is necessary in order to build a widespread pool of experience. IP surveillance systems are, by their very nature, not standard. The only real commonality between components is that they communicate using the same network protocol; everything else is bespoke on a manufacturer-by-manufacturer basis.

While there are clear and tangible requirements for megapixel technology in certain applications, it could be considered totally unnecessary for many projects, Howie said. “As always, there is a strong element within the market of wanting new technology for the sake of it, rather than there being an essential need for it. Personally, I see the advantages of megapixel cameras as giving a much better platform for the retrospective element of reviewing and investigating incidents. You get the clarity of details down to minute elements within the scene, delivering a level of quality and definition you can't get with a standard camera signal. That is a huge advantage.”

SELECTING THE RIGHT SOLUTION
There is ongoing debate on the evolution of SAN and NAS systems in the IT world, according to Howie, and this needs to be reflected in the security industry. The volume of storage required continues to increase; resolutions, frame rates and bit rates are all getting higher. A storage management solution is therefore a critical element of the overall system, in both design and serviceability terms. A robust storage solution should be designed from the outset with a RAID specification that ensures maximum fault tolerance. Once you have created a RAID volume and tied it into a SAN or NAS, you have a complex job on your hands if you misspecified it, especially if you then need to expand it. A 20 to 40 TB storage array is not the type of thing you simply whip out and ship off to the manufacturer to fix. Once the infrastructure is in place, it needs to be deployed with long-term operation in mind (at least seven to eight years).
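To illustrate how quickly these volumes accumulate, a rough sizing calculation for continuous recording is shown below; the camera count, bit rate and retention period are illustrative assumptions, not figures from the article.

# Back-of-the-envelope storage sizing for continuous megapixel recording.
# The camera count, bit rate and retention period are illustrative assumptions.
CAMERAS = 16            # number of megapixel cameras on the system
BITRATE_MBPS = 6        # average compressed stream bit rate per camera (Mbit/s)
RETENTION_DAYS = 30     # how long recorded footage must be kept

bytes_per_camera_day = BITRATE_MBPS * 1_000_000 / 8 * 86_400   # bytes recorded per camera per day
total_tb = CAMERAS * bytes_per_camera_day * RETENTION_DAYS / 1e12

print(f"Per camera per day: {bytes_per_camera_day / 1e9:.1f} GB")
print(f"Total for {CAMERAS} cameras over {RETENTION_DAYS} days: {total_tb:.1f} TB")

At these assumed figures the requirement already sits at roughly 31 TB, squarely in the 20 to 40 TB range discussed above, before any allowance for RAID parity or hot spares.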

“Having installed PBs of RAID storage within many business-critical applications, we strongly recommend RAID 6 with a conventional hot-swap disk arrangement,” Howie said. “In practice, we see the noted limitations and degradation of a RAID-6 set as minimal. One thing to bear in mind is that hard drives do and will fail. Using RAID 6 with a hot spare means that the spare is immediately brought online without any manual intervention to the system, giving you time to get the failed disk replaced while ensuring maximum protection.”
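As a rough guide to what that arrangement costs in capacity (the drive count and drive size below are assumptions for illustration, not figures from Howie), RAID 6 reserves two drives' worth of space for parity and a dedicated hot spare sets aside one more, while the set keeps recording through two simultaneous drive failures:

# Usable capacity of a RAID 6 set with one dedicated hot spare (illustrative assumptions).
DRIVES = 12             # drive bays populated in the enclosure
DRIVE_TB = 3.0          # capacity of each drive in TB

PARITY_DRIVES = 2       # RAID 6 stores two independent parity blocks per stripe
HOT_SPARES = 1          # standby drive the controller rebuilds onto automatically

usable_tb = (DRIVES - PARITY_DRIVES - HOT_SPARES) * DRIVE_TB
print(f"{DRIVES} x {DRIVE_TB:.0f} TB drives -> {usable_tb:.0f} TB usable with RAID 6 + hot spare")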

One further underutilized advantage of SAN and NAS systems is the inherent remote monitoring capability they provide. This offers a proactive approach to observing system health, acting as an early warning of potential issues. It seems logical to apply automated remote health monitoring to your most critical assets, yet in surveillance it is rarely done, perhaps because surveillance systems are never considered business-critical until they stop working. This is an attitude that needs to change in line with the complexity of, and dependence on, the latest generations of megapixel systems.
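A minimal sketch of that kind of proactive check is shown below, assuming each storage array and recorder exposes a management interface reachable over TCP; the device names and addresses are hypothetical placeholders, and a production system would feed the warnings into whatever alerting the monitoring platform provides.

# Minimal proactive health check: confirm each monitored device still answers on its
# management port and print a warning when one stops responding.
# Device names, addresses and ports are hypothetical placeholders, not from the article.
import socket
import time

DEVICES = {
    "nas-array-01": ("192.0.2.10", 443),    # storage array management interface (example address)
    "recorder-01":  ("192.0.2.20", 80),     # recording server web interface (example address)
}

def is_reachable(host, port, timeout=3):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

while True:
    for name, (host, port) in DEVICES.items():
        if not is_reachable(host, port):
            print(f"WARNING: {name} ({host}:{port}) is not responding")
    time.sleep(300)     # re-check every five minutes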

