A growing emphasis on data analytics, artificial intelligence and machine learning has led enterprises to take a closer look at graphics processing units.
In response, the top cloud vendors, Amazon Web Services (AWS), Microsoft and Google, now offer public cloud instances with graphics processing unit (GPU) support. While these GPU cloud instances have a great deal of potential, the technology is still in an early stage of development. As a result, a few hurdles exist, such as high costs and, in many cases, substantial custom development work.
A changing marketplace
Traditionally, resource-intensive computing, known as high-performance computing (HPC), was a niche technology geared toward research, engineering and video production. Typical applications include genomics, computational finance, video rendering and visualization.
HPC systems demand oodles of processing power on specially configured servers. Development often takes years and the applications require days or longer to run on high-end, dedicated systems. As a result, these applications were difficult to build, expensive to run and out of the reach of most enterprises.
But recently, HPC market dynamics have changed. The systems have moved away from customized CPUs toward commodity gear. Vendors like Nvidia and Intel have positioned their GPUs as a good fit for the HPC market, and top public cloud vendors have relied on them to support their HPC services.
HPC on the rise
HPC server sales continue to grow. "Servers for cognitive workloads are a fast-growing segment in the server market," said Peter Rutten, research manager at IDC's Enterprise Platforms Group. The research and analysis firm projects that worldwide revenue from servers for cognitive workloads will grow at a 19.8% CAGR from 2016 to 2021, representing a $9 billion opportunity.
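IDC's projection can be sanity-checked with a little arithmetic: given a 19.8% CAGR over the five years from 2016 to 2021 and a $9 billion end point, the implied 2016 baseline falls out directly. The sketch below is illustrative only; the baseline figure is derived from those two numbers, not quoted by IDC.

```python
# Sanity-check IDC's projection: a 19.8% CAGR over the five years from
# 2016 to 2021, ending at $9 billion, implies the 2016 baseline below.
# The baseline is derived for illustration; IDC quotes only the end point.

cagr = 0.198          # 19.8% compound annual growth rate
years = 2021 - 2016   # five compounding periods
end_revenue_b = 9.0   # projected 2021 revenue, in billions of dollars

implied_2016_b = end_revenue_b / (1 + cagr) ** years
print(f"Implied 2016 baseline: ${implied_2016_b:.2f}B")  # roughly $3.65B
```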
Also, the customer profile has changed. Historically, HPC systems were deployed in niche markets, but they have since worked their way into the corporate mainstream. Enterprises deal with large volumes of data that are expected to increase exponentially with the emergence of the internet of things. As firms collect and mine larger mounds of data, they are on the lookout for software to help automate processing. As a result, GPU systems -- including GPU cloud instances -- have become computational engines for artificial intelligence and machine learning applications.
Providers round out their GPU cloud instance lineups
This shift toward GPUs caught the attention of the top public cloud suppliers, which have recently moved into the space. In the fourth quarter of 2016, AWS and Microsoft Azure both revealed new GPU cloud instances for their public clouds -- the P2 instance type and the N Series, respectively -- and in February, Google also entered the fray with its GPU support, which is currently in beta.
While these instance types are well suited for machine learning and other compute-intensive applications, several barriers to adoption exist. First, these services are generally expensive. AWS and Microsoft Azure start their pricing at $0.90 per GPU per hour, while Google GPU cloud instances start at $0.70 per hour. By comparison, AWS starts its pricing for general-purpose compute instances at $0.0059 per hour.
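The price gap is easier to see as monthly totals. The sketch below multiplies the per-hour rates quoted above by a 730-hour month; the rates come from the figures in this article, while the always-on, single-GPU usage pattern is an assumption for illustration.

```python
# Compare monthly always-on costs using the per-hour rates quoted above.
# 730 hours/month (roughly 24 x 365 / 12) is an assumed usage pattern.

HOURS_PER_MONTH = 730

rates = {
    "AWS / Azure GPU (per GPU)": 0.90,
    "Google GPU (per GPU)": 0.70,
    "AWS general-purpose": 0.0059,
}

for name, hourly in rates.items():
    print(f"{name}: ${hourly * HOURS_PER_MONTH:,.2f}/month")

# A single GPU at $0.90/hour costs roughly 150x more per hour
# than the cheapest general-purpose instance rate quoted.
print(f"Ratio: {0.90 / 0.0059:.0f}x")
```

At these rates, a single always-on GPU runs to hundreds of dollars per month, which is why many teams schedule GPU instances only for training runs rather than leaving them up continuously.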
In addition, this market relies on custom development and highly specialized programming tools. Popular frameworks include Apache Spark, Caffe, Apache SystemML, MXNet, TensorFlow and Torch. Most enterprise IT departments lack the skills to write and manage applications built with these frameworks. Organizations need to invest in staff training and certification programs to manage such deployments.
HPC, machine learning and other compute-intensive applications continue to gain popularity with enterprises. And while GPU cloud instances can help support these workloads, high prices and a lack of experience may deter many enterprises from deployment in the near term.