
Asia-Pacific Deep Learning Processors for Data Center Market, Forecast to 2022

The Deep Learning Processor for Data Center market is an emerging segment with immense growth potential. The APAC region is expected to post the highest CAGR in the near future.


Definition / Scope

  • The modern data center (DC) is a complex interaction of multiple mechanical, electrical, and control systems. The sheer number of possible operating configurations and the nonlinear interdependencies between them make it difficult to understand and optimize energy efficiency. Deep learning has been shown to be an effective way of leveraging existing sensor data to model DC performance and improve energy efficiency.
  • Deep Learning involves two phases: training and inference. Deep learning workloads of both kinds are expected to grow in data centers, both for internal use and for providing computing power as a cloud service.
  • Hardware and processors with better compute performance and power efficiency are critical to accelerating the development of deep learning, alongside big data and advanced algorithms.
  • The Global data center accelerator market was valued at USD 1.60 billion in 2017 and is expected to reach USD 21.19 billion by 2022, at a CAGR of 49.47% during the forecast period. In this report, 2017 has been considered as the base year, and the forecast period is from 2018 to 2022.
  • The market has been segmented on the basis of processor type, application, type, and geography. The data center accelerator market, by processor type, has been segmented into CPU, GPU, FPGA, and ASIC. The market for FPGA is expected to grow at the highest CAGR during the forecast period. The growth can be attributed to the increasing adoption of FPGAs for the acceleration of enterprise workloads.
  • By application, the market has been segmented into deep learning training, public cloud inference, and enterprise inference. The market for enterprise inference is expected to grow at the highest CAGR during the forecast period. Amazon Web Services (US) has partnered with on-premises platform providers such as Intel (US), Microsoft (US), and VMware (US) to develop hybrid capabilities across storage, networking, security, and management tools, and to deploy applications that make the integration of cloud services easier.
  • The global data center accelerator market is expected to grow from USD 1 billion in 2016 to USD 10 billion in 2020, clocking a CAGR of about 75% (the CAGR definition used for such figures is given below).
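
For reference, the growth rates quoted throughout this report follow the standard compound annual growth rate definition; the formula below is a general reminder of how such figures are derived rather than a recomputation of any specific value above:

    CAGR = (V_end / V_start)^(1/n) − 1

where V_start and V_end are the market values at the start and end of the forecast window and n is the number of years between them. For example, a market that doubles over two years implies a CAGR of 2^(1/2) − 1 ≈ 41%.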

Market Overview

  • The deep learning processor for data center market in APAC is expected to grow at the highest CAGR during the forecast period (2017-2022), owing to growth in China, which can be attributed to the increasing demand for data centers as organizations seek enhanced connectivity and scalable solutions for their growing businesses. There has also been an increase in investments by the Government of China to stimulate technological development, which has led to the wide adoption of cloud-based services, big data analytics, and the Internet of Things (IoT).
  • The data center accelerator market for HPC data center in APAC is expected to be valued at USD 3.2 billion by 2022, growing at a CAGR of 42% during the forecast period.
  • Data center accelerator market for cloud data center in APAC was valued at USD 540 Million in 2017 and is likely to reach USD 2.8 billion by 2022, at a CAGR of 45.2% during the forecast period.
  • With continuing investments from private players and market participation from emerging APAC economies such as India and China, the deep learning processors for data center market is poised for strong growth.
  • The large-scale cloud service providers are aggressively supporting the deployment of cloud-based data centers because of the rapidly increasing businesses in the emerging markets of APAC such as India and China.

Key Metrics

Metric | Value | Explanation
Base Year | 2018 | Researched through the internet


Top Market Opportunities

Artificial Intelligence Helping Data Centers Become Energy Efficient:

  • Deep Learning offers opportunities to improve efficiency and reduce energy consumption by using existing data and real-time monitoring. With Deep Learning, workloads can be distributed across servers to maximize productivity and solve network congestion issues.
  • Deep Learning can also be used to control the data center environment in real time, for example cooling systems, to reduce energy consumption. Google is already applying AI to monitor its data center environment and reported that DeepMind's system reduced Google's data center cooling bill by 40 percent. A minimal modeling sketch of this idea is shown below.
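
The following is a minimal sketch of the kind of model described above: a small neural network regression that predicts PUE (power usage effectiveness) from facility telemetry and is then queried to compare candidate cooling setpoints. The file name, column names, and setpoint values are hypothetical placeholders; this is not Google's or DeepMind's actual pipeline.

```python
# Sketch: predict data-center PUE from facility sensor readings, then score
# candidate cooling setpoints against the trained model.
# File and column names are hypothetical; a real deployment would use the
# facility's own telemetry schema.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("dc_sensor_log.csv")  # assumed historical telemetry export
features = ["it_load_kw", "outside_air_c", "chiller_setpoint_c", "pump_speed_pct"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["pue"], test_size=0.2, random_state=0
)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))

# Evaluate candidate chiller setpoints for the current load and weather,
# and pick the one with the lowest predicted PUE.
candidates = pd.DataFrame({
    "it_load_kw": [4200.0] * 3,
    "outside_air_c": [18.0] * 3,
    "chiller_setpoint_c": [6.0, 7.5, 9.0],
    "pump_speed_pct": [70.0] * 3,
})
best = candidates.iloc[model.predict(candidates).argmin()]
print("lowest predicted PUE at chiller setpoint:", best["chiller_setpoint_c"])
```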

Using Deep Learning for Server Optimization:

  • Data centers have to maintain physical servers and storage equipment, and inefficiencies in server usage mean leaving money on the table. Deep Learning-based predictive analysis can help data centers distribute workloads across servers. The latest load-balancing tools with built-in Deep Learning capabilities are able to learn from past data and run load distribution more efficiently, and with AI-based monitoring, companies can better track server performance, disk utilization, and network congestion. A simplified sketch of predictive load distribution follows.
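
As a toy illustration of the predictive approach, the sketch below fits a tiny autoregressive model to each server's recent utilization history and dispatches the next workload to the server with the lowest predicted load. Server names and utilization traces are synthetic; a real tool would feed in monitoring data and use a more capable forecaster.

```python
# Sketch: route the next workload to the server with the lowest *predicted*
# CPU utilization, rather than the lowest current utilization.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Synthetic last-24-hours utilization traces (percent) for four servers.
history = {
    f"server-{i}": np.clip(
        50 + 20 * np.sin(np.arange(24) / 4 + i) + rng.normal(0, 5, 24), 0, 100
    )
    for i in range(4)
}

def predict_next_hour(series: np.ndarray, lags: int = 6) -> float:
    """Fit a small autoregressive model on lagged samples and predict the next value."""
    X = np.array([series[t - lags:t] for t in range(lags, len(series))])
    y = series[lags:]
    model = LinearRegression().fit(X, y)
    return float(model.predict(series[-lags:].reshape(1, -1))[0])

forecast = {name: predict_next_hour(trace) for name, trace in history.items()}
target = min(forecast, key=forecast.get)  # least-loaded server for the coming hour
print(f"dispatch next workload to {target} (predicted load {forecast[target]:.1f}%)")
```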

Using AI for Data Center Security:

  • Data centers have to be prepared for cyber attacks and threats, but the cybersecurity landscape is ever changing, and it is difficult for humans to stay up to date on all of it. Machine learning and deep learning applications can help data centers adapt to changing requirements faster. The British company Darktrace, for example, uses machine learning to define normal network behavior and then detects threats based on deviation from that norm. Traditionally, data centers have dealt with threats by restricting access and building impenetrable walls, but with a constant flux of users, restricting access alone has never been enough to ensure security. The more dynamic approach of AI-based systems can help data centers be more secure without imposing stringent rules on their users. A minimal anomaly-detection sketch follows.
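
The sketch below illustrates the "learn normal, flag deviation" idea with an isolation forest over simple per-connection features. The feature set and traffic samples are synthetic, and this is a generic anomaly-detection example rather than Darktrace's actual method.

```python
# Sketch: learn a baseline of "normal" network flows, then flag deviations.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Synthetic "normal" flows: bytes sent, bytes received, distinct destination
# ports contacted, and connection duration in seconds.
normal_flows = np.column_stack([
    rng.normal(2e5, 5e4, 5000),
    rng.normal(1e6, 2e5, 5000),
    rng.poisson(3, 5000),
    rng.exponential(30, 5000),
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_flows)

# A flow that pushes out far more data than usual across many ports should
# be scored as anomalous (IsolationForest.predict returns -1 for outliers).
suspect_flow = np.array([[5e7, 1e4, 180, 600]])
label = "anomalous" if detector.predict(suspect_flow)[0] == -1 else "normal"
print("suspect flow classified as:", label)
```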

Market Drivers

Dependency on Multimedia Content: Demand for consuming multimedia content is increasing along with the number of mediums for creating it. This dependency on multimedia content is pushing the adoption of Deep Learning processors for data centers.

Development of Parallel Computing: With the ability to process multiple tasks in parallel and the increasing complexity of computing tasks, there is a growing need for Deep Learning processors in data centers.

Incoming Investments in Acquisitions and Startups: A positive outlook in the technology sector has pushed both governments and private investors to invest in emerging technologies, including deep learning processors for data centers, cloud computing, and the Internet of Things (IoT).

Growth of Cloud Services: The growth in Cloud Computing segments including Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) is expected to fuel the growth in the adoption of Deep Learning Processors for Data Center.

Growing Demand for Machine Learning in HPC Data Centers: The efficiency of Deep Learning processors for data centers has encouraged their adoption in HPC data centers.

Market Restraints

Limited AI hardware experts:

  • The major restraint for the data center accelerator market is the limited pool of AI hardware experts. Integrating AI solutions with existing systems is a difficult task that requires well-funded in-house R&D and patent filing. Additionally, the professional services of data scientists and developers are needed to customize existing machine learning-enabled (ML-enabled) AI processors. Because AI is still in the early stage of its life cycle, the workforce possessing in-depth knowledge of the technology is limited.

The Burden of Managing Larger Chunks of Data:

  • A big data problem quickly arises with Deep Learning, because all of the operational log data must be managed and stored in order to analyze it, and most corporate compliance mandates require that data be stored for three years. The log data that feeds these Deep Learning systems quickly becomes a larger data set than the application data itself.

Premium Pricing:

  • The premium pricing of Deep Learning processors has been a shortcoming for small and medium enterprises (SMEs), limiting their adoption of Deep Learning processors for data centers.

Scale of the number of compute resources:

  • The sheer scale of the number of compute resources available during low-utilization periods leads to fundamentally different distributed training approaches, which imposes a few challenges.

Industry Challenges

The shortage of skilled labor is an industry challenge in this sector: deep learning is a new technological advancement, and the need to train professionals and incorporate the technology into their skill sets acts as a headwind for its growth.

Technology Trends

The Emergence of GPUs with Boosted Memory:

  • Graphics Processing Units (GPUs) are specialized processors originally created for computer graphics tasks. Modern GPUs contain a large number of simple processors (cores) and are highly parallel, which makes them very effective at running certain algorithms; matrix multiplications, currently the core operation of Deep Learning, are among them.
  • For instance, the Tesla V100 GPU received a memory boost from 16 GB to 32 GB, which helps data scientists train deeper and larger deep learning models and improves the performance of memory-constrained HPC applications. The Tesla V100 is engineered to provide maximum performance in existing hyperscale server racks and, with AI at its core, delivers 47x higher inference performance than a CPU server. This leap in throughput and efficiency makes the scale-out of AI services practical. A small CPU-versus-GPU matrix multiplication sketch follows this list.
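
A minimal sketch of why this parallelism matters: the same large matrix multiplication timed on the CPU and, when a CUDA device is present, on the GPU. The matrix size is arbitrary and any speedup observed depends entirely on the hardware at hand; nothing here reproduces the specific V100 figures above.

```python
# Sketch: time one large matrix multiplication on CPU vs. GPU (if available).
import time
import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

t0 = time.perf_counter()
_ = a @ b                                  # CPU matmul
cpu_s = time.perf_counter() - t0

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()               # make sure the copies have finished
    t0 = time.perf_counter()
    _ = a_gpu @ b_gpu
    torch.cuda.synchronize()               # wait for the asynchronous kernel
    gpu_s = time.perf_counter() - t0
    print(f"CPU: {cpu_s:.3f} s   GPU: {gpu_s:.3f} s")
else:
    print(f"CPU: {cpu_s:.3f} s   (no CUDA device found)")
```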

Alternatives to the GPU:

  • Microsoft Brainwave passes over the custom-ASIC approach of the TPU in favor of an FPGA implementation that is “designed for real-time AI”, meaning the system can ingest a request as soon as it is received over the network, at high throughput and ultra-low latency, without batching. Microsoft recently announced that its Brainwave accelerator will be used to power the Azure Machine Learning Service to provide “real-time AI calculations at competitive cost and with the industry’s lowest latency, or lag time.”
  • Intel Deep Learning Inference Accelerator is another FPGA-based product designed into an accelerator card that can be used with existing servers to yield “throughput gains several times better than CPU alone.”
  • The Graphcore IPU is a processor designed for machine learning workloads using a graph-based architecture that, according to company-run benchmarks, accelerates both model training and inference by one to two orders of magnitude over other AI accelerators (caveat emptor).

Regulatory Trends

  • There has been increased participation from governments in the APAC region to drive digital participation among citizens. Large-scale investments are flowing from the governments of countries such as China and India, and other emerging economies such as Taiwan, Singapore, and Malaysia, toward e-governance and national digital initiatives. These factors act as drivers for the market.
  • Policies such as 'Cloud First' and 'Startup India' from the Government of India are poised to drive the adoption of advanced technologies such as Deep Learning Processors for Data Center in India. Other countries are not far behind, as the adoption of 'Cloud First' strategies by countries such as Singapore and the Philippines is likely to fuel the growth of the market.

Market Size and Forecast

  • The APAC market is the fastest growing, in terms of CAGR, for Deep Learning Processors for Data Center, clocking a CAGR of 45.4% and total revenue of USD 580 million, owing to the growth in China. This can be attributed to the increasing demand for data centers in China, as organizations seek enhanced connectivity and scalable solutions for their growing businesses, and to increased investments by the Government of China to stimulate technological development, which has led to the wide adoption of cloud-based services, big data analytics, and the Internet of Things (IoT).
  • The data center accelerator market for HPC data center in APAC is expected to be valued at USD 3.2 billion by 2022, growing at a CAGR of 42% during the forecast period. The projected growth of the market in APAC during the forecast period is attributed to the rising demand from higher education institutes, scientific research institutes, weather forecasting departments, governments, and other enterprises to store and scale-up big data.
  • Also, to achieve HPC goals, companies such as Hewlett Packard Enterprise Development LP (US) and Intel Corporation (US) are forming tie-ups to launch research centers in India. For instance, in October 2016, Hewlett Packard Enterprise Development LP (US) signed a partnership with Intel Corporation (US) to launch a Centre of Excellence (CoE) in India to demonstrate HPC-as-a-service to customers.
  • The data center accelerator market for cloud data centers in APAC was valued at USD 540 million in 2017 and is likely to reach USD 2.8 billion by 2022, at a CAGR of 45.2% during the forecast period. Emerging markets such as India and China are attracting more businesses, so large-scale cloud service providers are aggressively supporting the deployment of cloud-based data centers. Additionally, cloud providers account for roughly 540 MW of critical IT load capacity in the Singapore market, with the majority of this capacity residing in their own data centers.

Market Outlook

  • The Deep Learning Processor for Data Center market is a growing market showing great promise for future growth, and it can be considered a sunshine market within the data center segment.
  • The Deep Learning Processor for Data Center market is poised to grow at a healthy CAGR of 45.2% over the period 2018-2022, driven by factors such as increasing investments in data center technologies and the rising demand from higher education institutes, scientific research institutes, weather forecasting departments, governments, and other enterprises to store and scale up big data.
  • The growth of the market in APAC is attributed to the increasing demand for data centers in China, as organizations seek enhanced connectivity and scalable solutions for their growing businesses. Also, there is an increase in the investments by the Government of China to stimulate technological development, which has led to the large adoption of cloud-based services, big data analytics, and the Internet of Things (IoT)
  • It is expected that 20% of the cloud accelerator TAM in 2020 will be driven by the deep learning “inference” market, while the remaining 80% will be driven by the deep learning “training” market.
  • There is an incremental market among private enterprises deploying artificial intelligence and business intelligence to mine data for actionable information and intelligence. Such enterprise customers could choose to deploy their own private cloud or on-premises equipment, or buy capacity on an on-demand basis from public cloud vendors such as Amazon or Microsoft.

Technology Roadmap

FPGA to handle limitations of GPU:

  • The FPGA is expected to displace the GPU as an accelerator for the data center for the following reasons:
  1. faster time to market (no layout or mask steps);
  2. no upfront non-recurring expenses (NRE);
  3. simpler design cycle (software handles routing);
  4. more predictable project cycle (due to the elimination of potential re-spins, wafer capacity constraints, etc.); and
  5. field programmability – customers can reprogram an FPGA remotely, even on a daily basis.

Nascent data center market opportunity

  • The new market identified for the FPGA is the data center, where it is expected to function as an accelerator, similar to a GPU. Microsoft was the first to consider using FPGAs for acceleration, largely driven by the need to build a scalable deep learning infrastructure, but it still uses GPUs in many servers (about 5-6% of its servers run deep learning). Baidu has also evaluated the FPGA as a potential way to accelerate SQL (database) workloads at scale. In our view, large companies evaluate multiple options over a period of time and optimize the hardware based on workloads; the FPGA could be one of them.

The Numbers on FPGA Market:

  • While it is hard to estimate the overall TAM for FPGAs, it is assumed that FPGAs could add another $300-400mn of TAM (a 15% attach rate to deep learning servers and a $200 ASP; a back-of-the-envelope version of this estimate is shown below). It is assumed that CPUs can handle inference workloads in servers and will be aided by FPGAs/ASICs as needed. Intel believes that FPGAs will be in one third of all cloud servers by 2020. Based on Microsoft's recent announcement that all new Azure servers include an FPGA accelerator card, and given that Azure accounts for 10-15% of total industry servers, it appears likely that FPGAs could be used in one third of servers by 2020.
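
A back-of-the-envelope version of that estimate, under the stated 15% attach rate and $200 ASP; the annual server base of roughly 10-13 million units is an illustrative assumption chosen to reproduce the quoted range, not a figure from the source:

    TAM ≈ servers per year × attach rate × ASP
        ≈ (10-13.3 million) × 0.15 × USD 200
        ≈ USD 300-400 million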

Competitive Landscape

  • NVIDIA’s massive data center business and the revenues generated by it notwithstanding, we are still in the early days of AI accelerators where multiple architectural techniques and product designs are vying for developer mindshare and a slice of IT spending.
  • NVIDIA undoubtedly has an enormous lead in the number of deployments, developers, and software packages using its GPUs and CUDA platform. However, the emergence of new types of machine learning algorithms, high-level software frameworks, and cloud services that hide architectural details behind an API abstraction layer could rapidly change the competitive landscape. Indeed, frameworks like TensorFlow, Caffe2, MXNet, and others still uninvented could drain the competitive moat that NVIDIA believes it has built via the CUDA platform and APIs.
  • The rampant search for faster, more efficient hardware to run AI software presents some vulnerability for the venerable GPU.

Competitive Factors

The competitive factors in the Deep Learning Processor for Data Center market include:

  • Innovation: A high 'innovation quotient' among processor manufacturers is an important factor fueling the growth of the industry.
  • Pricing: The Deep Learning Processor for Data Center market is price sensitive, hence companies with a strong pricing strategy are expected to capture market share.
  • Efficiency: Efficiency is an important factor in the manufacture of Deep Learning processors; companies that focus on improving processor efficiency are expected to succeed in the market.

Key Market Players

The key market players in the Deep Learning Processor for Data Center market include:

  1. Nvidia
  2. Google
  3. Intel
  4. IBM
  5. Xilinx
  6. AMD
  7. Microsoft

Strategic Conclusion

At the moment, Deep Learning looks promising for the data center industry. The rise of DL-based applications will increase demand for colocation service providers, and global data centers and colocation providers will step up their game to meet it. In turn, DL-based applications will help these data centers run efficiently and provide better service to their customers. It is important to choose global data centers and colocation service providers who can deliver cost-efficient services through the use of the latest technology for energy efficiency, optimization, security, compliance, and disaster recovery.

Further Reading

  1. https://store.frost.com/asia-pacific-deep-learning-processors-for-data-center-market-forecast-to-2022.html#section3
  2. https://thebusinessinvestor.com/deep-learning-processors-for-data-center-market-is-set-for-a-potential-growth-worldwide-excellent-technology-trends-with-business-analysis-intel-nvidia-xilinx-amd-google-ibm/16893/
  3. https://www.marketsandmarkets.com/Market-Reports/data-center-accelerator-market-48984803.html
  4. https://www.datacenterknowledge.com/machine-learning/nvidia-shrinks-deep-learning-data-center
  5. https://medium.com/@mrubash1/deepdream-accelerating-deep-learning-with-hardware-5085ea415d8a
  6. https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/42542.pdf
  7. https://datacenternews.asia/story/toshiba-develops-a-high-speed-algorithm-for-deep-learning-processors
  8. https://blog.equinix.com/blog/2018/12/05/a-practical-guide-to-artificial-intelligence-for-the-data-center/
  9. https://insidebigdata.com/2018/05/02/machine-learning-solves-data-center-problems-also-creates-new-ones/
  10. https://www.datacenterdynamics.com/opinions/the-future-of-ai-in-the-data-center/
  11. https://www.telehouse.com/2018/01/data-centers-and-artificial-intelligence/
  12. https://datacenterfrontier.com/ai-shaping-data-center-industry/
  13. https://hackernoon.com/trends-and-challenges-in-cloud-computing-with-deep-learning-33c23e9201a9
  14. https://blog.inten.to/hardware-for-deep-learning-part-3-gpu-8906c1644664
  15. https://www.prnewswire.com/news-releases/deep-learning-market-size-worth-102-billion-by-2025--cagr-521-grand-view-research-inc-622483734.html
  16. https://diginomica.com/2018/05/17/competition-hardware-machine-learning-market-take-2/
  17. https://www.nvidia.com/en-us/data-center/tesla-v100/
  18. https://www.microway.com/preconfiguredsystems/nvidia-dgx-1-deep-learning-system/dgx-1-with-tesla-v100-deep-learning-training-performance/

Appendix

  • USD - US Dollar
  • DL - Deep Learning
  • ML - Machine Learning
  • AI - Artificial Intelligence
  • CAGR - Compounded Annual Growth Rate
  • IoT - Internet of Things
  • CoE - Center of Excellence
  • APAC - Asia Pacific

