Arm dedicates resources to AI and machine learning

Finding the right software and hardware components to run compute-intensive machine learning algorithms has long been an obstacle for AI. Now, leading companies are putting considerable resources into deep learning R&D to address these issues. One of them is Arm, which has built a solid reputation with its CPUs and its mobile GPUs, and which is now also developing processors specifically for artificial intelligence and machine learning.
 
In 2017, Arm began adding targeted deep learning support to its Cortex CPUs and Mali GPUs. Then, in February 2018, Arm announced Project Trillium, a new line of processors dedicated specifically to AI and machine learning.
 
When Arm was researching processors for deep learning applications, it began by evaluating the compute workloads and how its current processors, as well as competitive approaches, performed. “However, it’s not just about the processors,” explained Jem Davies, Fellow, VP and GM of Machine Learning Group at Arm, “it’s actually about a continuum of scalability, flexibility and options. Deep learning isn’t something that’s only going to apply to a particular class of device with particular performance, power and cost requirements. Rather, deep learning will apply, to greater or lesser degrees, across the entire spectrum of computing over time.”
 
From these evaluations, Arm concluded that for some applications, a completely new architecture designed from the ground up was required to address these deep learning workloads. Thus Project Trillium was born: a suite of Arm IP including new highly scalable processors and neural network software libraries that deliver enhanced machine learning and neural network functionality. It includes the Arm Machine Learning (ML) processor, which is ideally suited for deep learning applications.
 
“A large number of targeted deep learning optimizations were added to this ground-up designed machine learning processor. The Arm ML processor achieves the highest performance of 4.6 TOPS (trillion operations per second) and a stunning efficiency of 3 TOPS/W for mobile devices and smart cameras for inference on-device at the edge,” Davies explained.
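The performance and efficiency figures quoted above imply a power envelope that can be checked with simple arithmetic. The sketch below uses only the numbers from the quote; the calculation itself is illustrative, not an Arm specification.

```python
# Back-of-the-envelope check of the quoted figures: if a processor
# sustains 4.6 TOPS at an efficiency of 3 TOPS/W, the implied power
# draw is performance divided by efficiency.
peak_performance_tops = 4.6   # trillion operations per second
efficiency_tops_per_w = 3.0   # trillion operations per second per watt

implied_power_w = peak_performance_tops / efficiency_tops_per_w
print(f"Implied power draw: {implied_power_w:.2f} W")  # ~1.53 W
```

An envelope on the order of 1.5 W is consistent with the mobile and smart-camera use cases the quote describes.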
 
“Each machine learning application has specific data it’s trying to address and even within an application the amount of data to process can vary greatly. One processing approach to deep learning applications seldom meets specific scenarios. Instead a flexible and scalable platform approach to processing that offers users choice and flexibility will best meet the demanding variety of deep learning applications,” Davies said.
 
For video surveillance, Arm offers a range of processors that includes the Arm Machine Learning processor and the Arm Object Detection (OD) processor. Davies explained that the Arm OD processor is ideally suited for security cameras, since it delivers real-time, full-HD object detection at 60 frames per second at low power.
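To put the "full HD at 60 frames per second" claim in concrete terms, the sketch below works out the per-frame time budget and raw pixel throughput such a pipeline must sustain. These are generic video figures derived from the resolution and frame rate, not Arm-specific internals.

```python
# Illustrative arithmetic: at full HD (1920x1080) and 60 fps, how much
# time is available per frame, and how many pixels per second must an
# object-detection pipeline process?
width, height, fps = 1920, 1080, 60

frame_budget_ms = 1000.0 / fps            # time available per frame
pixels_per_second = width * height * fps  # raw pixel throughput

print(f"Per-frame budget: {frame_budget_ms:.2f} ms")           # ~16.67 ms
print(f"Pixel throughput: {pixels_per_second / 1e6:.1f} MP/s") # ~124.4 MP/s
```

A budget of roughly 16.7 ms per frame is why dedicated silicon matters here: the entire detection pass must finish within that window to keep up with the camera.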

