As enterprises deploy more storage resources to the public cloud, the performance of each provider's service can have a profound effect on related workloads. Issues like storage service levels, network connectivity and application design can all affect application performance. Workloads depend on storage, so it's important to achieve and maintain the necessary levels of storage performance over time.
Use these five tactics to optimize the performance of your public cloud storage service.
Choose the storage type with care
Traditional enterprises have complete control over IT resources and their performance. But public cloud computing doesn't work this way. A cloud storage service provider won't change its offerings to create something unique for your business -- that defeats the speed and scale that make public cloud so versatile.
Users instead have to select from a limited menu of storage services, each with its own advantages and constraints. One of the best ways to optimize the performance of a public cloud storage service is to understand those constraints and make your choice carefully, based on performance requirements.
For example, Amazon Web Services (AWS) users typically choose Amazon Simple Storage Service (S3) Standard for low latency and high throughput of frequently accessed data, though performance is variable. The challenge is to select a storage service that provides a level of performance and resilience that is most appropriate for your workload, at the lowest possible cost. If you already chose a service and find it inadequate, consider shifting data to another service tier, a different storage service or even a different public cloud provider.
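As a rough sketch of this selection process, the helper below maps a coarse access profile to an S3 storage class. The class identifiers (STANDARD, STANDARD_IA, GLACIER) are real S3 constants, but the thresholds are illustrative assumptions, not AWS recommendations:

```python
# Hypothetical decision helper: the access-frequency thresholds are
# assumptions for illustration; tune them against real workload data.

def pick_s3_storage_class(accesses_per_month: int, latency_sensitive: bool) -> str:
    """Suggest an S3 storage class from a coarse access profile."""
    if latency_sensitive or accesses_per_month > 30:
        return "STANDARD"      # frequent access, low latency
    if accesses_per_month >= 1:
        return "STANDARD_IA"   # infrequent access, lower storage cost
    return "GLACIER"           # archival data, retrieval delay acceptable

print(pick_s3_storage_class(100, True))   # STANDARD
print(pick_s3_storage_class(2, False))    # STANDARD_IA
```

A real deployment would more likely encode such rules as S3 lifecycle policies rather than application code, but the trade-off being made is the same.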
Monitor and measure meaningful metrics
Users need to know when a public cloud storage service performs the way it should, when performance falters and when the service is disrupted. Measure relevant metrics to gauge availability and performance. Consider a native monitoring service from a cloud provider, such as Amazon CloudWatch, Azure Monitor or Google Cloud Platform (GCP) Stackdriver Monitoring.
This kind of service monitoring and measurement simplifies troubleshooting and facilitates improvements to workload architectures and designs. For example, monitoring reports can help an enterprise identify bottlenecks in network or storage performance. Insights from monitoring tools could also lead to service configuration changes, such as more storage capacity or the integration of another storage service.
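To make "meaningful metrics" concrete, the sketch below reduces raw latency samples (as might be exported from a monitoring service such as CloudWatch) to a p95 percentile and flags a potential bottleneck. The 200 ms threshold is an arbitrary assumption for illustration:

```python
import math

def percentile(samples: list[float], pct: float) -> float:
    """Nearest-rank percentile of a non-empty sample list."""
    ordered = sorted(samples)
    rank = math.ceil(pct / 100 * len(ordered)) - 1
    return ordered[rank]

def storage_latency_alert(samples_ms: list[float], threshold_ms: float = 200.0) -> bool:
    """Flag the storage tier when p95 latency exceeds the threshold."""
    return percentile(samples_ms, 95) > threshold_ms

samples = [12.0, 15.0, 14.0, 13.0, 480.0]  # one slow outlier
print(storage_latency_alert(samples))      # True: p95 catches the outlier
```

Percentiles matter here because averages hide the tail latency that users actually feel; a mean of those samples looks healthy while p95 does not.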
Review and redesign the workload
Many other organizations will use the same storage services that you use, which can result in unexpected performance variations. Users cannot change a provider's public cloud storage service to address this, but they can potentially change the architecture and design of their workload to optimize performance.
For example, if you move or deploy a workload to one public cloud region while the storage resources for that workload remain in a different region, performance can suffer. To address this issue, architects can replicate the original storage repository to a duplicate storage resource in the new region and redirect the workload to use the replicated storage. Architects can also implement caching. For example, sensitive database workloads could benefit from a service such as Amazon ElastiCache or Azure Redis Cache to provide high-performance, in-memory cloud caching.
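The caching approach described above is commonly implemented as the cache-aside pattern. In this minimal sketch, a plain dict stands in for an in-memory cache service such as ElastiCache or Redis, and `fetch_from_storage` is a hypothetical stand-in for a slow cloud or cross-region read:

```python
# Cache-aside sketch: check the cache first, fall back to storage on a
# miss, then populate the cache so the next read avoids the round trip.

cache: dict[str, bytes] = {}

def fetch_from_storage(key: str) -> bytes:
    # Placeholder for a high-latency cloud-storage read.
    return f"value-for-{key}".encode()

def cached_read(key: str) -> bytes:
    """Return cached data on a hit; on a miss, read storage and populate."""
    if key in cache:
        return cache[key]            # cache hit: no storage round trip
    value = fetch_from_storage(key)  # cache miss: pay the latency once
    cache[key] = value
    return value

cached_read("report.csv")   # miss: hits storage, fills the cache
cached_read("report.csv")   # hit: served from memory
```

A production version would add expiry and invalidation; the sketch only shows why repeated reads stop paying the storage latency.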
Finally, developers should evaluate the storage sensitivity of applications and consider design changes. For example, asynchronous communication can be more forgiving of latency and disruption than synchronous communication -- though asynchronous operation poses a greater risk of data loss. Ultimately, a workload that relies on public cloud storage must adapt to the behaviors of that storage.
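The asynchronous trade-off above can be sketched with a buffered writer: the caller never waits on storage latency because writes are batched, but anything still buffered at crash time is lost. `slow_storage_write` is a hypothetical stand-in for a cloud PUT:

```python
# Asynchronous, batched writes: latency-tolerant but with an at-risk
# window of buffered records that a crash would lose.

durable: list[str] = []

def slow_storage_write(records: list[str]) -> None:
    durable.extend(records)  # pretend this is a high-latency network call

class AsyncBufferedWriter:
    def __init__(self, batch_size: int = 3):
        self.batch_size = batch_size
        self.buffer: list[str] = []

    def write(self, record: str) -> None:
        """Returns immediately; durability is deferred to the next flush."""
        self.buffer.append(record)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        if self.buffer:
            slow_storage_write(self.buffer)
            self.buffer = []

w = AsyncBufferedWriter()
w.write("a")
w.write("b")   # still only in memory: this is the data-loss window
w.write("c")   # batch full -> one storage call for three records
```

A synchronous design would instead call storage on every write: safer, but every request absorbs the full storage latency.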
Evaluate hybrid storage opportunities
When local workloads cannot overcome the performance limitations of a public cloud storage service, implement specialized tools to accelerate the connection between your data center and the cloud.
One example of such a hybrid implementation is AWS Storage Gateway, which organizations typically deploy as an appliance in their own data centers. The gateway operates in three primary modes: file, volume and tape. In file mode, local workloads store file data as objects in Amazon S3; organizations primarily use this mode for backup and disaster recovery tasks. In volume mode -- which organizations commonly use for snapshots and other backups -- local workloads access cloud-backed iSCSI volumes, and the gateway can cache frequently accessed data in local storage while the rest remains in the cloud. In tape mode, the gateway extends an existing tape-based backup system to the cloud as a virtual tape library.
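The cached-volume idea -- a bounded amount of hot data on local disk while the full volume lives in the cloud -- is essentially an LRU cache. The sketch below illustrates that eviction behavior; the capacity, block IDs and the backing `cloud_read` callback are assumptions for illustration, not Storage Gateway internals:

```python
from collections import OrderedDict

class LocalBlockCache:
    """Keep the most recently used blocks locally; evict the coldest."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.blocks: OrderedDict[int, bytes] = OrderedDict()

    def read(self, block_id: int, cloud_read) -> bytes:
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)   # refresh recency on a hit
            return self.blocks[block_id]
        data = cloud_read(block_id)             # miss: fetch from the cloud
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)     # evict least recently used
        return data

local = LocalBlockCache(capacity=2)
local.read(1, lambda b: b"one")
local.read(2, lambda b: b"two")
local.read(3, lambda b: b"three")   # exceeds capacity: block 1 is evicted
```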
Enhance your connectivity
Performance problems aren't necessarily rooted in the cloud storage provider or the service itself; they can instead stem from limitations in internet connectivity. The public internet carries risks of unexpected congestion and disruption -- both of which can interrupt storage traffic and impair performance.
One option is to increase WAN bandwidth to the public internet. To accomplish this, replace existing WAN links with a high-bandwidth WAN link, such as 10 Gigabit Ethernet (GbE) or faster. As an alternative, combine multiple lower-bandwidth WAN links, such as two or more 1 GbE links. Multiple links can also enhance network resilience -- if one link fails, another can maintain connectivity.
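The resilience point can be sketched as simple link failover: try each WAN link in order and fall through on failure. The link names and send functions here are hypothetical placeholders, not a real networking API:

```python
# Minimal failover sketch: the first healthy link carries the traffic.

def send_over_links(payload: bytes, links: list) -> str:
    """Return the name of the first link that accepts the payload."""
    for name, send in links:
        try:
            send(payload)
            return name
        except ConnectionError:
            continue  # link down: fail over to the next one
    raise ConnectionError("all WAN links failed")

def broken(_: bytes) -> None:
    raise ConnectionError("link down")   # simulate a failed link

def healthy(_: bytes) -> None:
    pass                                 # simulate a successful transfer

print(send_over_links(b"data", [("wan-1", broken), ("wan-2", healthy)]))  # wan-2
```

Real multi-link setups typically handle this in the network layer (link aggregation or dynamic routing) rather than in application code, but the fallback logic is the same.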
Organizations can also use dedicated network connectivity services between their data center and the public cloud storage service. Examples of these services include AWS Direct Connect, Azure ExpressRoute and Google Cloud Interconnect. A dedicated, high-performance connection can eliminate the variable performance of the public internet and improve the use of limited WAN bandwidth.