Storage is one of the main drivers to migrate to the cloud, and major providers, such as Amazon Web Services, Google Cloud Platform and Microsoft Azure, continue to compete to be number one -- but no longer just on price.
In today's public cloud storage market, a combination of demand and a desire to differentiate creates a veritable arms race among providers. New databases, big data storage tools and faster solid-state drives (SSDs) continue to surface, leading to more options for IT teams.
In the beginning, cost was king
There are numerous factors that led to the low cost of public cloud storage services.
Most traditional cloud storage systems were built on 1 TB Serial Advanced Technology Attachment (SATA) drives, and from about 2008 to 2011, the prices of those drives plummeted, especially in the quantities purchased by large cloud service providers (CSPs). Then along came 2, 4, 6, 8 and 10 TB drives, each further eroding the base cost of storage capacity for CSPs.
Meanwhile, as their older and more expensive storage gear reached its sell-by date, providers repurposed it for archival services -- for which there was major demand in the public cloud storage market. For users, this meant CSPs now offered bulk storage for cold data types. And, because these services require so little maintenance, they cost roughly one-fifth as much as top-tier cloud storage services.
The future of the public cloud storage market
Can the arms race continue at the same pace? Yes. Enterprises can expect further price erosion in the cloud storage market but also new software features and performance opportunities.
There is overlap between the CSPs in this race, but differentiation has occurred. For example, while Amazon Web Services (AWS) is the leader in terms of the number of services, Google has made big strides in artificial intelligence, machine learning and big data.
Public cloud storage, in general, continues to evolve at a fast pace. Local instance stores now provide fast direct-attached storage (DAS)-like capabilities to a compute instance in the cloud. Admins deploy this kind of storage with the compute instance, and it is persistent only as long as the instance is active.
Instance stores were initially disk drives, but within a few short months, SATA SSDs replaced some of them. The switch reduced latency and increased I/O rates, which means faster jobs for users. These faster SSD instance stores are particularly beneficial for GPU-based and in-memory operations.
Admins, however, need to understand the cost impact of local SSD instance storage. Evaluate your options with performance tools that can measure I/O latency and throughput.
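One way to start such an evaluation is with a simple latency probe. The sketch below is a Python illustration, not a vendor tool: it times synchronous 4 KB writes to a given directory, which makes the latency gap between, say, a local SSD instance store and a network-backed volume easy to quantify. The mount points in the usage comments are assumptions; substitute your own.

```python
import os
import statistics
import tempfile
import time

def measure_write_latency(path, block_size=4096, iterations=100):
    """Write fixed-size blocks with fsync to a temp file under `path`
    and return median and p99 latency in milliseconds."""
    latencies = []
    fd, tmp = tempfile.mkstemp(dir=path)
    try:
        block = os.urandom(block_size)
        for _ in range(iterations):
            start = time.perf_counter()
            os.write(fd, block)
            os.fsync(fd)  # force the write to the device, not just the page cache
            latencies.append(time.perf_counter() - start)
    finally:
        os.close(fd)
        os.remove(tmp)
    latencies.sort()
    return {
        "median_ms": statistics.median(latencies) * 1000,
        "p99_ms": latencies[int(0.99 * len(latencies))] * 1000,
    }

# Hypothetical usage -- compare an instance-store mount with network storage:
# print(measure_write_latency("/mnt/instance-store"))
# print(measure_write_latency("/mnt/network-volume"))
```

A dedicated benchmark such as fio gives far more detail, but even a probe like this one exposes whether a workload's write path actually benefits from local SSD instance storage before you pay for it.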
In the next two years, SSDs will supersede disk drives as the primary form of public cloud storage. SSD capacities have already reached 32 TB, and 100 TB drives are likely to appear by the end of 2018, driven mostly by 3D NAND technology. This further reduces the space, power and cost requirements of storage.
Even more interesting is the idea of non-volatile memory express (NVMe) over Ethernet, a form of shareable storage that approaches the performance of the fastest DAS SSDs. This approach could merge instance and cloud network storage, which would enable instance storage to persist after the instance closes and eliminate the need to write to network storage each time a write to instance storage occurs.
New gateways replace appliance controllers with NVMe over Ethernet, and this cuts unit costs considerably. Huawei now has a drive with a native NVMe over Fabrics interface, which connects directly to a switch. This technology will likely play a role in the public cloud storage market, as it removes a performance bottleneck and cuts costs.