Optimized chips push machine, deep learning to new heights

The tech world’s obsession with artificial intelligence is driving companies to develop better, more optimized solutions for running machine learning and deep learning algorithms. The latest chips are not only making AI more accessible to various industries but also driving better efficiency and increased accuracy.
When it comes to artificial intelligence (AI), 2018 is looking to be a year of significant growth, largely due to big steps being made in machine learning and deep learning. The deep learning market alone is expected to be worth US$1.7 billion by 2022, growing at a compound annual growth rate (CAGR) of 65.3 percent over the forecast period of 2016 to 2022, according to a report by market research firm MarketsandMarkets. The report cites robust R&D into better processing hardware and the increasing adoption of cloud-based technology for deep learning as the major factors driving growth.
For the deep learning hardware market specifically, MarketsandMarkets predicts a high growth rate due to the growing need for hardware platforms with high computing power to run deep learning algorithms. This has also intensified competition among established and startup players alike, leading to new product developments in both hardware and software platforms for running deep learning algorithms and programs, the report stated.

Evolving chip options

There are many chip options when it comes to machine learning and deep learning applications. These include, but are not limited to, GPUs, CPUs, VPUs, FPGAs (field-programmable gate arrays) and ASICs (application-specific integrated circuits), all of which have machine-learning-optimized versions developed by the who’s who of technology giants.
For example, in 2016 Google announced it had developed a proprietary TPU (tensor processing unit) specifically for neural network machine learning. The TPU is an AI accelerator ASIC designed for Google’s open-source TensorFlow framework. In the second half of 2017, Huawei launched its Kirin 970 processor for mobile use, which has a dedicated NPU (neural processing unit). It seems as though every major tech company has developed its own processing unit, swapping out the first letter for a new one.
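As a rough illustration (not Google's actual implementation), the workload these accelerators are built around is the dense matrix multiply at the heart of a neural-network layer. A minimal NumPy sketch of one layer's forward pass, with arbitrary example shapes:

```python
import numpy as np

# The core operation AI accelerator chips speed up: a dense matrix
# multiply followed by a simple nonlinearity. Shapes are illustrative.
rng = np.random.default_rng(0)
x = rng.standard_normal((1, 256))    # one input activation vector
w = rng.standard_normal((256, 128))  # layer weights
b = np.zeros(128)                    # layer biases

y = np.maximum(x @ w + b, 0.0)       # matmul + bias + ReLU
print(y.shape)                       # one output vector of 128 features
```

Chips like the TPU dedicate silicon to performing thousands of these multiply-accumulate operations in parallel, which is why they outpace general-purpose CPUs on this specific workload.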
In a report titled Technology, Media and Telecommunications Predictions 2018, Deloitte expects GPUs and CPUs to still be the largest part of the machine learning chip market in 2018. However, Deloitte Global predicts that by the end of 2018, over 25 percent of all chips used to accelerate machine learning in the data center will be FPGAs and ASICs.
“These new kinds of chips should increase dramatically the use of machine learning, enabling applications to consume less power and at the same time become more responsive, flexible and capable, which is likely to expand the addressable market,” said the report. As a result, Deloitte predicts chip sales for machine learning tasks to at least quadruple in only two years.
Still, GPUs are expected to make up the largest portion of chips used in the field. Deloitte anticipates the market for GPUs will surpass half a million chips sold in 2018, more than double the 2016 figure, which was estimated at 100,000 to 200,000.

Innovations by Intel

Many of the world’s largest chip makers have put considerable R&D resources into developing hardware and software components powerful enough to process the heavy data loads required by deep learning algorithms. Companies like NVIDIA, Arm and Intel have developed such solutions.
The Myriad X VPU (vision processing unit) is Intel’s most recent addition to its line of VPUs, and is designed to take imaging, computer vision and machine intelligence applications into network edge devices, such as smart cameras, security systems, 360 cameras, drones and virtual reality (VR)/augmented reality (AR) headsets. Its predecessor, the Myriad 2 VPU, had been optimized for high-performance visual intelligence at ultra-low power for drones, robotics, virtual reality and smart security solutions.
New to the Myriad X VPU is the Neural Compute Engine, a new deep neural network processing unit. According to the product brief for the Myriad X, the Neural Compute Engine was specifically designed to run deep neural networks at high speed and low power, enabling the Myriad X VPU to reach over 1 TOPS (trillion operations per second) of compute performance on deep neural network inferences.
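To put that 1 TOPS figure in perspective, a back-of-envelope calculation shows how a compute budget translates into inference throughput. The layer sizes below are hypothetical, chosen only to illustrate the arithmetic:

```python
# Back-of-envelope check: how many inferences per second fit in a
# 1 TOPS (1e12 operations/second) budget? Model numbers are hypothetical.

def ops_per_inference(layer_macs):
    """Total operations per inference, counting each
    multiply-accumulate (MAC) as two operations."""
    return sum(2 * macs for macs in layer_macs)

# Hypothetical vision network: MACs per layer.
layer_macs = [120e6, 300e6, 80e6]
total_ops = ops_per_inference(layer_macs)   # 1e9 ops per inference

budget_ops_per_sec = 1.0e12                 # 1 TOPS
max_fps = budget_ops_per_sec / total_ops    # upper bound on inferences/sec

print(f"{total_ops / 1e9:.1f} G-ops per inference, up to {max_fps:.0f} fps")
# → 1.0 G-ops per inference, up to 1000 fps
```

Real throughput is lower once memory bandwidth and data movement are accounted for, which is exactly why the architecture minimizes on-chip data movement, as Intel's brief notes.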
“The Neural Compute Engine is integrated as part of the power efficient Movidius VPU architecture which minimizes power by reducing data movement on-chip. While the Myriad 2 VPU has provided superior deep neural network support at low power, the Myriad X VPU can now reach 10x higher performance for applications requiring multiple neural networks running simultaneously,” stated a brief by Intel.
