Machine learning enables traffic management for safer roads

According to statistics from the United Nations (UN), almost 60 percent of the world’s population will live in urban areas by 2030, exerting great pressure on road transportation systems. With more people and cars on the road, governments around the world face a range of challenges in keeping traffic flowing smoothly and safely.

However, thanks to recent technological advancements, more and more city governments are adopting artificial intelligence (AI) technology to analyze traffic patterns and improve monitoring efficiency, enabling automatic incident detection and timely responsiveness.

According to a report from Zion Market Research, the global video analytics market is forecast to reach around US$11.1 billion by 2022, growing at a CAGR of 34.3 percent between 2017 and 2022. Transportation accounted for the largest share of the verticals segment in 2016, and increasing complexity in this vertical will drive demand for video analytics in the coming years. Traffic monitoring has emerged as one of the leading application segments, owing to a growing need for actionable insights from intelligent video analytics systems.

Juber Chu, CEO of ACTi, expects machine learning to grow quickly in the coming years as governments accelerate their adoption of smart technologies. The technology can be applied across transportation sectors, including railways and airports, and is not limited to road transport.

Constant Rutten, Marketing Applications Video Systems at Bosch Security Systems, said, “New technologies like machine learning will be a relevant part of intelligent transportation systems, greatly reducing the number of accidents caused by perceptual or cognitive driver overload, human error or adverse environmental conditions. It will also lead to better planning and utilization of the transport infrastructure, fewer traffic jams and reduced travel times.”

These smart systems are also helping traffic monitoring operators and authorities extract meaningful data for actionable insights. “Our customers want to identify problems like traffic jams, accidents or slowdowns as quickly as possible, and to have a system that provides useful data for planning future expansion of infrastructure like building roads or adding traffic lights,” said Zvika Ashani, CTO of Agent Video Intelligence (Agent Vi), a company that specializes in video analytics and works with city governments in the U.S., Europe and the Asia-Pacific region.

Considerations for machine learning technology

When applying machine learning, there are a few things to consider. Just like humans, these intelligent machines need to be “taught” and to accumulate experience to perform better. This requires a large amount of training data covering a variety of patterns and scenarios.

The right training input

Rutten said, “The quality and quantity of the training input data used to train a machine learning system is reflected in the system’s output. If the training data is incorrectly labeled, preprocessed or normalized, or is not representative of the classification or detection task at hand, results will be disappointing.”

Eric Olson, VP of Marketing at PureTech Systems, said, “It is important for integrators to understand that deep learning is exactly that — thoroughly teaching the software to learn specific patterns and scenarios. In some cases, the supplier may have already invested time to teach the software about the type of a scene in which the integrator is interested. However, if you have a unique scene or scenario, part of the installation process must take into consideration the time required for the software to be ‘taught’ the scene and the events of concern.”

Anthony Fulgoni, Chief Revenue Officer at Calipsa, said, “Machine learning is not magic. It has to be taught what to do, what to look for and what to report on. It has to be taught what is normal so it can identify the transgression.” He added that it is not just the machine that needs to be taught; users also have to understand the requirements of machine learning technology: clear camera angles help, while camera occlusion may reduce accuracy.
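Fulgoni’s point that a system must first be “taught what is normal” can be illustrated with a toy sketch. The following is purely illustrative, not Calipsa’s method: it models normal traffic speed on a road segment from historical readings, then flags readings that fall far below the learned norm as possible incidents.

```python
# Toy sketch (illustrative only): learn "normal" traffic speed from history,
# then flag readings that deviate sharply below it as possible incidents.
import statistics

def fit_normal(speeds):
    # "Teach" the system what normal looks like: mean and spread of speeds.
    return statistics.mean(speeds), statistics.stdev(speeds)

def is_incident(speed, mean, std, k=3.0):
    # Flag readings more than k standard deviations below the learned norm.
    return speed < mean - k * std

# Assumed historical speeds (km/h) on a free-flowing road segment.
history = [62, 58, 65, 60, 61, 59, 63, 64, 60, 58]
mean, std = fit_normal(history)

print(is_incident(61, mean, std))  # False: within normal flow
print(is_incident(15, mean, std))  # True: sudden slowdown, likely incident
```

Real systems learn far richer notions of “normal” (per lane, per time of day, per weather condition), but the principle is the same: deviations are only detectable relative to a learned baseline.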

To unleash the potential of these algorithms, Guy Baron, CTO of Qognify, pointed out that a sophisticated set of algorithms is not enough. It is important to couple the system with a well-defined implementation process that focuses on optimizing the conditions in which the edge sensing device operates and feeds data into the algorithms. He said, “This can take the form of a site survey activity to verify lighting, field of view (FoV) and resolutions of the cameras.”

Training algorithms for flexibility

These algorithms can be trained to adapt to varying environmental conditions. Ashani said, “You need to collect images of vehicles during rain or in foggy conditions and add those to your training patterns. The machine learning algorithms can learn the patterns and deliver accurate results in these conditions. If you don’t train the system with data from these conditions, the accuracy would probably not be high.”
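Ashani’s point about adverse-weather training data can be made concrete with a toy nearest-centroid “detector.” Everything here is an illustrative assumption (the features, the numbers, the classifier), not any vendor’s actual algorithm: a model trained only on clear-weather samples misreads a washed-out foggy vehicle, while one trained on a representative set does not.

```python
# Toy sketch (illustrative assumptions throughout): a nearest-centroid
# "detector" trained only on clear-weather samples misclassifies a foggy
# vehicle; adding representative foggy samples to the training set fixes it.
# Feature vector = (contrast, edge_strength), both in [0, 1].

def centroid(samples):
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n for i in range(2))

def classify(x, centroids):
    # Assign x to the label of the nearest class centroid (squared distance).
    def d2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: d2(x, centroids[label]))

# Clear-weather training data: vehicles show high contrast and strong edges.
clear_vehicles = [(0.9, 0.8), (0.85, 0.9), (0.95, 0.85)]
background     = [(0.2, 0.1), (0.15, 0.2), (0.25, 0.15)]

narrow = {"vehicle": centroid(clear_vehicles),
          "background": centroid(background)}

# A vehicle seen through fog: contrast and edges are washed out.
foggy_vehicle = (0.5, 0.45)
print(classify(foggy_vehicle, narrow))  # misclassified as "background"

# Retrain with foggy vehicle samples included (representative data).
foggy_vehicles = [(0.5, 0.45), (0.55, 0.5), (0.45, 0.4)]
broad = {"vehicle": centroid(clear_vehicles + foggy_vehicles),
         "background": centroid(background)}
print(classify(foggy_vehicle, broad))   # now detected as "vehicle"
```

Production systems use deep networks rather than centroids, but the failure mode is identical: patterns absent from the training set are patterns the model cannot recognize.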

Machine learning also makes it easier to add new algorithms to detect desired patterns and offers more design flexibility. Chu said, “Strong ground reflection from snow can affect the performance of computer vision-based video surveillance systems. If you want to add a new algorithm to a CV-based system, it’s not easy.”

Daniel Chau, Overseas Marketing Director at Dahua Technology, noted that current video analysis is based on traditional intelligent analytics technologies, whose logic is strict and often fixed. “Once the algorithms within these systems are formed, the cost of modifying them later on is disproportionate to any benefits gained from changes.”

“Machine learning is similar to human learning in that continuous study and correction lead to individual conclusions, enabling flexible and smooth operation when faced with many unknown variables,” he said. Machine learning provides other benefits, such as lower hardware and system integration requirements. Chau said, “The addition of machine learning lowers the requirements for system installation and camera angles, while at the same time being able to extract specific characteristics from vehicles and analyze the status of traffic congestion on roads.”

Machine learning on the edge and in the cloud

To increase performance, companies have embedded machine learning algorithms in cameras, in servers and in the cloud.

Edge devices can process the data first and then send only the metadata generated by machine learning-based algorithms to the backend server, instead of uploading the entire video stream. This saves bandwidth and time, and reduces the workload of local and cloud servers. Bosch Security Systems applies this technology in cameras for traffic monitoring and management. “Because the technology is included at the edge in every Bosch camera, the compute power for analytics and machine learning grows with every camera added to the monitoring system, without the need for additional server or cloud compute capacity,” said Rutten.
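A back-of-the-envelope calculation shows why sending metadata instead of video matters. The figures below are illustrative assumptions, not Bosch specifications: a modest H.264 bitrate for one camera versus small per-event metadata records.

```python
# Back-of-the-envelope comparison (assumed figures, not vendor specs):
# streaming full video vs. sending only per-event metadata from the edge.

VIDEO_BITRATE_MBPS = 4.0        # assumed H.264 stream for one 1080p camera
METADATA_BYTES_PER_EVENT = 200  # assumed: timestamp, class, box, speed
EVENTS_PER_SECOND = 10          # assumed detection rate on a busy road

video_bytes_per_hour = VIDEO_BITRATE_MBPS * 1_000_000 / 8 * 3600
metadata_bytes_per_hour = METADATA_BYTES_PER_EVENT * EVENTS_PER_SECOND * 3600

print(f"video:    {video_bytes_per_hour / 1e9:.2f} GB/h per camera")   # 1.80
print(f"metadata: {metadata_bytes_per_hour / 1e6:.2f} MB/h per camera")  # 7.20
print(f"reduction: ~{video_bytes_per_hour / metadata_bytes_per_hour:.0f}x")
```

Under these assumptions the metadata path uses roughly 250 times less bandwidth per camera, which is what makes large multi-camera deployments feasible without streaming everything upstream.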

Baron said, “As the amount of data being produced, captured and analyzed rises, we will need more computation power at the edge available for these computation-heavy algorithms. We are likely to see more and more GPU- or FPGA-enabled devices being deployed to accommodate this need.”

It is challenging to send video data from a large number of cameras to a backend server, and Ashani identified bandwidth as the main barrier today to processing video in the cloud. The company’s patented distributed video analytics architecture splits the video processing task between an edge component at the remote site network and a cloud-based server, providing high analytics performance while eliminating the need to stream the video to the cloud.

“The camera does some local processing and just sends a small amount of data to the cloud for video processing, enabling real-time uploads from camera to cloud. This allows the system to support a large number of cameras without using a lot of bandwidth,” Ashani explained.

Baron indicated that a new system architecture style known as “fog computing” can help process the vast amounts of data collected by sensors. He explained, “System architecture concepts need to be revised and evolved. Cloud computing is the ‘new normal.’ To analyze the large amount of data, there will probably be a more hybrid model, in which data gets collected and crunched by machine learning algorithms on or near the edge devices emitting metadata, which will be forwarded to the cloud for further analysis and long-term storage.”
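The hybrid model Baron describes can be sketched in a few lines. This is a conceptual illustration only (the record fields and functions are invented for the example): an edge or fog node reduces raw frames to one compact metadata record, and the cloud tier works only with metadata.

```python
# Conceptual sketch of a fog/hybrid pipeline (fields and functions are
# invented for illustration): edge nodes crunch raw readings into compact
# metadata; the cloud tier aggregates metadata, never raw video.

def edge_process(raw_frames):
    # Edge/fog node: reduce raw per-frame data to one small metadata record.
    vehicle_counts = [f["vehicles"] for f in raw_frames]
    return {
        "camera": raw_frames[0]["camera"],
        "frames": len(raw_frames),
        "avg_vehicles": sum(vehicle_counts) / len(vehicle_counts),
        "max_vehicles": max(vehicle_counts),
    }

def cloud_aggregate(metadata_records):
    # Cloud: long-term analysis over forwarded metadata only.
    return {m["camera"]: m["avg_vehicles"] for m in metadata_records}

raw = [{"camera": "cam-01", "vehicles": v} for v in [3, 5, 4, 8]]
meta = edge_process(raw)        # one small record instead of 4 raw frames
print(cloud_aggregate([meta]))  # {'cam-01': 5.0}
```

The design choice is the one Baron names: computation-heavy crunching happens on or near the device, and only the distilled result travels upstream for further analysis and long-term storage.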


The technology is here to stay, but machine learning still has a long way to go before reaching widespread adoption. First, systems supporting machine learning require heavy computational capability, so product features must be enhanced to support the algorithms. A good machine learning-based system also requires all of its elements, such as sensors and data analytics capability, to work together to generate good results. Finally, a solid understanding of the new technology before system implementation is important to achieve high performance in traffic monitoring and management.