
Seeing the potential of AI in video

Video technologies are evolving at a rapid pace, with artificial intelligence (AI) and machine learning having enormous potential to transform video technology and fundamentally change the way we live and work.

For example, last year convenience chain 7-Eleven implemented facial recognition video technology in its 11,000 stores across Thailand. The technology is used to identify loyalty members, analyse in-store traffic, suggest purchases and even measure shoppers' emotions. In the healthcare industry, institutions are beginning to use video analytics to improve patient care – for instance, alerting staff if a patient has gone too long without being checked, or even identifying if a patient has fallen and needs assistance.

The use of AI-powered video technology to simplify everyday processes has already begun – from easier security checks at the airport to paying for purchases with a smile. The trend is only set to accelerate from here, with an estimated 1 billion video cameras connected to artificial intelligence platforms by 2020.

The core of AI learning – shallow or deep

AI is an all-encompassing concept covering various techniques, including neural networks, and refers to the ability of a machine or a computer program to think, act and learn like humans. Due to previous limitations in hardware processing power, machine learning – an application of AI – could only deploy shallow learning on very large data sets, which looks at data in just three dimensions.

With recent, significant advances in the processing power of graphics processing units (GPUs), combined with coding techniques such as parallelisation, we can now utilise a deep learning approach and look at data in many more levels or dimensions – hence the word “deep”.
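To make the “deep” in deep learning concrete: a deep model simply stacks more layers of transformation between input and output than a shallow one. The sketch below, in plain Python, is purely illustrative – the layer sizes and random weights are assumptions, not any particular production model.

```python
import random

def layer(inputs, weights):
    """One fully connected layer with a ReLU activation."""
    return [max(0.0, sum(w * x for w, x in zip(ws, inputs))) for ws in weights]

def forward(x, layers):
    """Pass the input through each layer in turn."""
    for w in layers:
        x = layer(x, w)
    return x

def rand_weights(n_out, n_in):
    """Random weight matrix: n_out rows of n_in weights (illustrative only)."""
    return [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]

random.seed(0)
# A "shallow" model: one layer mapping 3 inputs straight to 1 output.
shallow = [rand_weights(1, 3)]
# A "deep" model: the same mapping, but through stacked intermediate layers,
# each able to represent the data at another level of abstraction.
deep = [rand_weights(4, 3), rand_weights(4, 4), rand_weights(1, 4)]

x = [0.5, -0.2, 0.9]
print(forward(x, shallow), forward(x, deep))
```

Both models map the same input to a single output; the deep one just passes it through more intermediate representations along the way.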

Software parallelisation is a coding technique for breaking a single problem into hundreds or thousands of smaller problems. The software can then run those 100 or 1,000 smaller tasks across as many processing cores simultaneously, instead of waiting for one core to work through the data 1,000 times in sequence.

With parallelisation, there is a quantum leap forward in how fast we can solve a problem. Having the ability to solve problems faster allows us to go deeper with a problem and process larger, more complex data sets. As the world's data is set to grow 10-fold by 2020, being able to process data faster and deeper will become a defining factor in staying ahead of the business curve.

Bringing the potential of AI augmentation to life

AI and machine learning are being applied so that AI-enabled devices and machines can master and perform low-cognitive functions. For example, humans cannot sit and watch all cameras simultaneously – our attention spans simply do not work that way.

Machines, however, are extremely good at exactly this. While we see objects, the machine sees the finest detail available to it – each and every pixel. Within each pixel, the machine can see even more detail: the colour values that make up the image. By aggregating this data and allowing machines to automate responses and solutions, we can augment human interaction and our environment.
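As a toy illustration of this pixel-level aggregation, the sketch below compares two hypothetical greyscale frames pixel by pixel and counts significant brightness changes – a crude stand-in for the kind of automated monitoring described above. The frame sizes, values and threshold are all assumptions for the example.

```python
def frame_diff(prev, curr, threshold=30):
    """Count pixels whose brightness changed by more than `threshold` (0-255 scale)."""
    changed = 0
    for row_prev, row_curr in zip(prev, curr):
        for a, b in zip(row_prev, row_curr):
            if abs(a - b) > threshold:
                changed += 1
    return changed

# Two hypothetical 4x4 greyscale frames: in the second, the bottom half brightens.
frame_a = [[10] * 4 for _ in range(4)]
frame_b = [[10] * 4 for _ in range(2)] + [[200] * 4 for _ in range(2)]

print(frame_diff(frame_a, frame_b))  # 8 pixels changed -> possible motion
```

A real system would run this kind of comparison over millions of pixels per frame, thirty times a second, for every camera at once – precisely the tireless, detail-level attention humans cannot sustain.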

With AI, there will be massive advancements in how we review and utilise video and data. 

Imagine an interaction between a near-eye lens, a medium-distance viewing glass and large video screens. On the small lens, there is an overlay of detailed text data, with augmented video on the medium-distance glass and the big scene view on the large screen. The live video, augmented visuals and text data all work in concert. When an individual looks at the large screen, the data changes based on what the individual is seeing in the near-eye screen. With this intelligent augmentation, the system will know whether a person is looking at a face, a building or a license plate and show related information accordingly. All of this is possible today.

The City of Hartford in the United States is a great example of technology as a force multiplier. Working in tandem with local law enforcement and partners BriefCam and Axis, Milestone Systems was able to enhance the City's C4 Crime Centre and provide a significant upgrade to the Hartford Police department’s ability to prevent and effectively respond to incidents throughout the city.

Many crimes are now solvable as a result. Rather than spending 30 hours on low-cognitive, manual tasks – such as freezing on a rooftop to monitor a drug house all day and night – officers can now sit at their desks and, within just a few minutes, know exactly where a drug house is by seeing an augmented-reality view of foot traffic over time.

With the enhanced system, officers can simply go into the data and extract the problem with precision and efficiency – changing the way police work will be done in the future.

An intelligent industrial revolution

Having machines take over low-cognitive tasks will be a significant game-changer for years to come. With proper aggregation of information, machines can be better at low-cognitive tasks and often deliver a better quality of service than humans.

Amazon is applying this to retail stores, where the concept of a checkout is being replaced by customers simply walking out. By using data from smartphones, cameras, sensors, purchase histories and other data points, Amazon is making it possible for us to walk into a store, pick up what we need and walk out. Everything else is taken care of by machines. This type of thinking and tool creation is still in its infancy but will continue to address problems that are of more value to our lives.

In the book "The Inevitable", Kevin Kelly says the next 10,000 startups will be based on finding applications for AI, much as businesses of the second industrial revolution were built on electrification. The intelligent industrial revolution is beginning to happen all around us. It will be very disruptive within the security and surveillance industry – but also insightful and liberating, as we free human effort for higher cognitive processes and address the larger challenges ahead.
