When building a hybrid cloud, IT teams should carefully consider their performance requirements to avoid an I/O bottleneck. These requirements are use-case dependent.
At one extreme, databases pound on storage and can't get enough I/O operations per second (IOPS); at the other, web servers run comfortably at low I/O levels. A high-end server running only database instances would likely need several thousand IOPS to remove the storage bottleneck completely, while a web server hosting 2,000 containers might need only 1,000 IOPS on average.
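A back-of-envelope sizing sketch makes the contrast concrete. The per-workload figures below are illustrative assumptions drawn from the examples above, not benchmarks:

```python
# Back-of-envelope IOPS sizing for a mixed hybrid-cloud host fleet.
# All per-workload figures here are illustrative assumptions.

def required_iops(workloads):
    """Sum peak IOPS demand across workloads.

    workloads: list of (instance_count, iops_per_instance) tuples.
    """
    return sum(count * iops for count, iops in workloads)

# Hypothetical mix: a few database instances that each need thousands of
# IOPS, and 2,000 web containers averaging a fraction of an IOPS each.
db_instances = (4, 5000)       # several thousand IOPS per database
web_containers = (2000, 0.5)   # ~1,000 IOPS total across the web tier

total = required_iops([db_instances, web_containers])
print(f"Estimated peak demand: {total:,.0f} IOPS")  # 21,000 IOPS
```

Even a rough estimate like this shows the database tier dominating the I/O budget, which is why the storage decisions below matter most for database-heavy private clouds.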
Local instance stores can ease the load on networked I/O, but storage performance must still scale to match the cloud's compute power and agility.
Consider flash and solid-state drive (SSD) storage to increase IOPS and reduce the likelihood of an I/O bottleneck. All-flash arrays can reach one million or more IOPS, while newer storage appliances built from, for example, 12 inexpensive Serial Advanced Technology Attachment (SATA) SSDs can reach 500,000 IOPS. But fast networked storage puts enormous pressure on networks. Consider dedicated 10 Gigabit Ethernet (GbE) storage local area networks, which will begin migrating to 25 GbE this year.
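The arithmetic behind that network pressure is easy to check. Assuming a uniform 4 KB block size (real workloads mix block sizes), the line rate a given IOPS load generates is:

```python
# Estimate the network bandwidth a given IOPS load generates.
# Assumes one uniform block size; real workloads mix sizes.

def iops_to_gbps(iops, block_size_bytes=4096):
    """Convert an IOPS rate to gigabits per second of line rate."""
    return iops * block_size_bytes * 8 / 1e9

# A 500,000-IOPS SATA SSD appliance at a 4 KB block size:
print(f"{iops_to_gbps(500_000):.1f} Gbps")  # 16.4 Gbps
```

At roughly 16 Gbps, a single appliance of this class already saturates a 10 GbE link, which is why the move to 25 GbE matters for storage networks.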
In a hybrid cloud, fast SSD and flash technology can increase storage performance for local private clouds, but the bridge to the public cloud requires further planning, especially if you use cloud bursting. The problem is that wide area network (WAN) transfers remain slow.
To further reduce the chances of an I/O bottleneck, form a data management strategy that positions data as closely as possible to where it will be used. This requires the duplication of data sets in each cloud environment. This model will work for most types of data, including application code, tool sets and operating systems, as well as computer-aided design libraries and customer history files.
For critical data, such as inventory levels, a single copy is essential for data consistency. Often these are database records, and a cloud bursting model could use sharding to distribute processing and the associated data. If IT teams plan this in advance, they can preposition a snapshot of a portion of the database. Then, during a burst, that section of the database in the public cloud would sync with any changes in the current private version. The public cloud has many instance and storage options, so sandbox your apps to determine the best configurations.
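One way to decide which records to preposition is stable key-based sharding. The sketch below is illustrative only; the shard count, shard assignments and function names are assumptions, not a specific product's API:

```python
import hashlib

# Sketch of key-based sharding for cloud bursting: records whose keys
# hash to a "burst" shard are snapshotted to the public cloud in
# advance; all other shards stay private. Shard count and which shards
# burst are illustrative assumptions.

NUM_SHARDS = 8
BURST_SHARDS = {6, 7}  # shards prepositioned in the public cloud

def shard_for(key: str) -> int:
    """Map a record key to a shard with a stable cryptographic hash."""
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

def placement(key: str) -> str:
    """Return which cloud holds the authoritative copy of this record."""
    return "public" if shard_for(key) in BURST_SHARDS else "private"
```

Because the hash is stable, every node agrees on where a record lives without coordination, and during a burst only writes that touch the burst shards need to sync back to the private copy.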
Related Q&A from Jim O'Reilly