
Which workloads benefit from GPU instances in the public cloud?

GPU instances help enterprises run more compute-intensive workloads on the public cloud. But what kinds of apps, specifically, are a good fit for these instance types?

As the large cloud providers duke it out for market share, they've expanded the variety of cloud instance types they offer. Users now, for example, can deploy instances that have access to graphics processing units, which are especially beneficial to compute-intensive workloads like artificial intelligence.

The competition between the top cloud providers over graphics processing unit (GPU) instances is only starting to heat up. Azure in December 2016 rolled out its N-Series virtual machines that include Nvidia GPUs, and, in September 2016, Amazon Web Services unveiled the new Amazon Elastic Compute Cloud P2 instance type, which also includes Nvidia GPUs. Google, for its part, revealed in February 2017 that Google Compute Engine and Cloud Machine Learning will support Nvidia GPU instances, as well.

Before you deploy one of these GPU instances, it's important to understand the types of applications and workloads they best support. These workloads include:


Business analytics applications benefit from GPUs that offer massively parallel computing. Hadoop-like applications, with data processing that can be mapped across a set of engines, fit that bill. The public cloud offers a pay-as-you-go model and the ability to handle variable workloads through cloud bursting. These capabilities are especially important, for example, in the retail industry, where users require analytics responses within a couple of minutes and where workload demands peak at different times.
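The map-across-engines pattern described above can be sketched in a few lines of Python. This is a minimal, CPU-based stand-in: a thread pool plays the role of the parallel engines, and the store/revenue data is purely hypothetical. In a real GPU analytics pipeline, each partition would be dispatched to a GPU engine instead of a worker thread, but the fan-out/combine structure is the same.

```python
from concurrent.futures import ThreadPoolExecutor

def partition_total(partition):
    """Map step: reduce one data partition to a partial sum."""
    return sum(revenue for _, revenue in partition)

def total_revenue(partitions):
    """Fan partitions out across workers (stand-ins for GPU engines),
    then combine the partial results into one answer."""
    with ThreadPoolExecutor() as pool:
        return sum(pool.map(partition_total, partitions))

if __name__ == "__main__":
    # Hypothetical per-store sales partitions: (store_id, revenue) pairs.
    partitions = [
        [(1, 100.0), (2, 250.0)],
        [(3, 75.0), (4, 125.0)],
    ]
    print(total_revenue(partitions))  # 550.0
```

Because each partition is processed independently, adding engines (or GPU instances, via cloud bursting) shortens the response time without changing the logic, which is what makes this workload shape a good fit for pay-as-you-go GPU capacity.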

Video production work also benefits from GPU instances because it involves large amounts of rendering and real-time editing.

Artificial intelligence (AI) is still in its infancy, but having ready access to GPUs provides a way to test scalability without a huge expense. This will stimulate the growth of AI startups, and open up new AI opportunities for industries ranging from healthcare and biotechnology to military applications and self-driving vehicles.

Virtual desktop infrastructure (VDI) could also get a boost from GPU instances. Google, for instance, revealed the future availability of AMD's FirePro GPUs, aimed at high-performance VDI, on Google Compute Engine.

FPGA instances

The effect of field-programmable gate array (FPGA) cloud instances is somewhat more complicated. In general, FPGA-based computing can deliver enormous performance for a single, narrowly defined use case. The technology is still young, and much of the emphasis is on accelerating specific compute tasks, such as compression and encryption. As it matures, the idea of using an FPGA-based cloud instance will follow the same adoption curve as GPUs, and organizations can use low-cost sandboxing to kick-start adoption.

Supercomputing has already been made available to smaller scientific organizations through high-performance computing clouds, which can dramatically accelerate projects. GPU instances will help bring supercomputing to the undergraduate level, benefiting academic research.

Engineering simulation, such as that used in the oil and gas and automobile industries, will also be impacted by cloud-based GPU instances. Car manufacturers rely on engineering simulations that can be very time-consuming, but GPU instances will remove the need for field-portable processing clusters, lower analysis costs and generally accelerate projects.

Next Steps

Explore cloud instance types from Azure

Do you need to upgrade to a larger cloud instance?

Map Google cloud instance types to your workloads


Join the conversation



What are some benefits of running a GPU instance in the cloud?
I am quite curious about how cloud computing is used efficiently in high-performance computing. Most of today's cloud implementations rely on virtualization technologies such as Xen or KVM. However, virtualization is by definition an inefficient use of CPU because of the virtualization penalty. In spite of hardware virtualization support, there is a 30 to 70% penalty for using virtualization. For this reason, virtualization was popular in the enterprise, where the problem was underutilization of resources, so no one cared about inefficiency. In HPC, however, I assume the situation will be different, as the inefficiencies introduced by virtualization will have a larger impact.
The virtualization penalty is relatively small and is offset by the gain in flexibility for an HPC cloud. We no longer have one-user-takes-all or even manual segmenting of the HPC cluster for multiple tenants.
Instead, automated on-demand provisioning allows HPC to be delivered cost-effectively to even small users who could never justify an HPC cluster of their own. This opens HPC up for many academic or business projects, with substantial improvements in schedule.