Despite some of the market hype, high-performance computing is nothing new. My first job out of college was operating a Cray X-MP for a timesharing service. Later, I gained more HPC experience with supercomputers when I worked at NASA.
However, the applications for high-performance computing (HPC) are still limited. It's overkill for enterprises to insist on using HPC services for anything but the most performance-sensitive workloads. But as big data and Internet of Things (IoT) applications proliferate, there are more workloads that might be right for HPC -- and HPC in the cloud, especially, will play an increasing role in the enterprise.
For starters, cloud computing makes HPC more affordable. Aside from high-end engineering firms and the government, most enterprises can't justify the cost of a supercomputer. However, HPC services hosted in the public cloud have dramatically reduced the cost of supercomputing. Meanwhile, public cloud providers are starting to offer more HPC services.
Before diving into the world of HPC in the cloud, it's important to understand your options, and which workloads will justify your costs.
Exploring common HPC applications
"HPC allows users to solve complex science, engineering and business problems using applications that require high bandwidth, enhanced networking and very high compute capabilities," according to Amazon Web Services (AWS). These types of applications typically have the following characteristics:
- I/O intensive: Data-centered applications, such as big data and IoT apps, need to read and write data at high speeds.
- Compute intensive: Dynamic, processor-heavy operations, such as video rendering, demand very high compute capacity.
The rise of big data and IoT systems has made HPC much more useful. And the need to process large amounts of data, especially to support operational systems, means that speed is critical. HPC systems' ability to perform calculations in just a few seconds allows them to align directly with business processes in near real time.
Exploring options for HPC in the cloud
Different cloud providers bring different capabilities to the HPC arena.
AWS offers HPC services that provide you with "access to a full-bisection, high-bandwidth network for tightly-coupled, I/O-intensive workloads," according to the vendor. This means you can scale out across thousands of cores, allowing you to not only use HPC, but to scale your HPC applications. To do this, you can launch AWS C4 instances, which are the latest version of Amazon EC2 compute-optimized instances. In addition to HPC applications, C4 instances are designed for massively multiplayer online gaming, media processing and transcoding, AWS said.
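To make the scale-out idea concrete, here is a minimal sketch of the pattern behind it: an "embarrassingly parallel" workload split into independent chunks and fanned out across CPU cores. This toy example uses Python's standard multiprocessing module on a single machine; it is not AWS-specific, but the same divide-and-distribute pattern is what HPC services spread across thousands of cores.

```python
# Sketch: fan a compute-intensive function out across local CPU cores.
# heavy_kernel is a hypothetical stand-in for real work (e.g., rendering
# one video frame or transcoding one media segment).
from multiprocessing import Pool

def heavy_kernel(n: int) -> int:
    """Stand-in for a compute-intensive task."""
    return sum(i * i for i in range(n))

work_items = [50_000] * 8  # eight independent chunks of work

# Distribute the chunks across worker processes.
with Pool() as pool:
    parallel = pool.map(heavy_kernel, work_items)

# The parallel run produces the same answers as a sequential loop;
# only the wall-clock time changes as cores are added.
sequential = [heavy_kernel(n) for n in work_items]
assert parallel == sequential
```

Because each chunk is independent, adding cores (or cluster nodes) shortens the run roughly in proportion, which is why I/O- and compute-intensive workloads like these are the natural fit for HPC instances.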
In December 2015, Google announced "a breakthrough that could prove its quantum computer is actually using quantum mechanics," according to Popular Science. "When researchers gave the D-Wave 2X a carefully crafted test problem, the 1,000-qubit computer solved it 100,000,000 times faster than a 'classical computer' could."
While that speed is impressive, it's also far more power -- and far more cost -- than most enterprise IT budgets can justify. Much like a Formula One racer, it's impressive -- but who drives one of those cars to work?
D-Wave 2X is not a cloud service yet, but it's likely that Google will bake it into its public cloud. Just last week at its GCP Next conference, Google committed to support machine learning and other applications that are proper fits for HPC. However, D-Wave 2X will be technology overkill for 99.9% of business applications.
HPC is here to stay. However, what we consider HPC today could become just "computing" within a few years. The bar is being raised in terms of what our new applications expect from our infrastructure. As the need for faster and more valued responses increases, so will the need for HPC in the cloud.