VideoIQ: Demystifying self-learning video analytics

VideoIQ introduces the B.R.A.I.N model, which enables cameras programmed to mimic the brain's neural network to "learn" through interactions with the environment.

Animate vision is also described as a response-driven learning process: lessons are learned from others, through corrections and feedback, and the more interactions there are with "teachers", the faster the learning. Response and feedback can also come from interactions with the environment itself; the classic example is the hot stove or the thorn, where the injury marks the action as a mistake.

Whether the concepts come from the bootstrap process or the response process, learning is continuous. When an object is "seen", a complex and sophisticated neural network is continuously learning behind the scenes, which is what allows the gift of sight to be taken for granted.

VideoIQ introduces the B.R.A.I.N model, which enables cameras programmed to mimic this neural network model to "learn" through interactions with the environment. When a camera is first powered on, its field of view is new and unfamiliar. Much like a human in an unfamiliar environment, it instinctively attempts to identify everything that seems recognizable in form or function, and over time it becomes familiar with the environment and the placement of objects within it. If an additional item appears in its line of vision, the camera may first classify it as a suspicious object. However, should the item remain still over a period of time, the camera will "learn" that it is where it belongs, overriding its initial reading of the scene. Not only does it "remember" or "learn" placement, it can also identify objects through repetitive patterns, for example car movement or human movement, refining its notion of how each object's form appears in its line of vision.
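
The scene-familiarization behaviour described above can be pictured with a short sketch. This is not VideoIQ's actual algorithm, only a hypothetical Python illustration of the idea that a newly appeared object is treated as suspicious until it has stayed still long enough to be absorbed into the learned scene; the class names and the dwell threshold are assumptions.

```python
import time
from dataclasses import dataclass

DWELL_SECONDS = 600  # assumed dwell time before a static object is "learned" as part of the scene

@dataclass
class TrackedObject:
    object_id: int
    first_seen: float
    last_moved: float
    learned: bool = False  # True once the object is accepted as belonging to the scene

class SceneMemory:
    """Hypothetical scene model: flags new objects, then quietly learns the static ones."""

    def __init__(self) -> None:
        self.objects: dict[int, TrackedObject] = {}

    def observe(self, object_id: int, moved: bool, now: float | None = None) -> str:
        """Update state for one detection and return its current status."""
        now = time.time() if now is None else now
        obj = self.objects.setdefault(
            object_id, TrackedObject(object_id, first_seen=now, last_moved=now)
        )
        if moved:
            obj.last_moved = now
            obj.learned = False  # a learned object that starts moving becomes interesting again
            return "moving"
        if not obj.learned and now - obj.last_moved >= DWELL_SECONDS:
            obj.learned = True   # still for long enough: treat it as where it belongs
        return "learned" if obj.learned else "suspicious"

# Usage: a parcel appears, is flagged as suspicious, then is absorbed after sitting still.
memory = SceneMemory()
print(memory.observe(1, moved=False, now=0.0))    # -> "suspicious"
print(memory.observe(1, moved=False, now=700.0))  # -> "learned"
```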

Learning from the environment is very effective at increasing camera intelligence and thus decreasing false alarms. With the introduction of the teach-by-example feature, operators can mark alarms as "true" or "false", allowing the camera to "learn" from its mistakes and reducing the chance of it making the same mistakes in the future. The data becomes usable once the camera has seen at least 30 true/false alarms; however, the larger the database, and the more problems attempted, the more effective the teaching session becomes.
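
A minimal sketch of such a teach-by-example loop might look like the following. The feature names, the classifier choice, and the retraining rule are assumptions for illustration only; the 30-alarm minimum comes from the article.

```python
from sklearn.linear_model import LogisticRegression

MIN_LABELLED_ALARMS = 30  # per the article, the data becomes usable after ~30 true/false alarms

class TeachByExample:
    """Illustrative feedback loop: operators label alarms, a simple model learns from them."""

    def __init__(self) -> None:
        self.features: list[list[float]] = []  # e.g. object size, speed, dwell time (assumed features)
        self.labels: list[int] = []            # 1 = true alarm, 0 = false alarm
        self.model: LogisticRegression | None = None

    def mark_alarm(self, feature_vector: list[float], is_true_alarm: bool) -> None:
        """Record an operator verdict and retrain once enough labelled alarms exist."""
        self.features.append(feature_vector)
        self.labels.append(1 if is_true_alarm else 0)
        if len(self.labels) >= MIN_LABELLED_ALARMS and len(set(self.labels)) == 2:
            self.model = LogisticRegression().fit(self.features, self.labels)

    def should_alert(self, feature_vector: list[float]) -> bool:
        """Alert on everything until the model has been trained at least once."""
        if self.model is None:
            return True
        return bool(self.model.predict([feature_vector])[0])
```

The point is the loop rather than the particular model: every operator verdict adds to the database, and as the article notes, the larger that database grows, the more effective the teaching becomes.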

It is rumored that the VideoIQ B.R.A.I.N can also manipulate a camera's sensitivity level to find a happy medium. As the operator adjusts the sensitivity knob, the system shows how well the camera would have performed at that sensitivity level over the same sample of alarms. Increasing the sensitivity may raise the number of false alarms, though the increase in true alarms may outweigh this, and vice versa. With a baseline for easy comparison, it takes only a few slides of the knob to figure out the ideal level for each camera.
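
That sensitivity preview can be sketched in the same spirit, assuming alarms are stored with a detection score and the operator's verdict. The mapping from sensitivity to threshold below is an assumption rather than the product's actual mechanics: replaying the labelled sample at a candidate setting shows how many true and false alarms would have fired.

```python
from dataclasses import dataclass

@dataclass
class LabelledAlarm:
    score: float    # detection confidence assigned by the analytics, 0..1 (assumed)
    is_true: bool   # operator's verdict on the alarm

def preview_sensitivity(alarms: list[LabelledAlarm], sensitivity: float) -> tuple[int, int]:
    """Return (true_alarms, false_alarms) the camera would have produced at this sensitivity.

    Higher sensitivity maps to a lower score threshold, so more alarms fire.
    """
    threshold = 1.0 - sensitivity
    fired = [a for a in alarms if a.score >= threshold]
    true_alarms = sum(a.is_true for a in fired)
    return true_alarms, len(fired) - true_alarms

# Sliding the "knob": compare a few settings against the same labelled sample.
sample = [LabelledAlarm(0.9, True), LabelledAlarm(0.6, True),
          LabelledAlarm(0.5, False), LabelledAlarm(0.2, False)]
for sensitivity in (0.3, 0.5, 0.8):
    print(sensitivity, preview_sensitivity(sample, sensitivity))
```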
