As we cram more virtual machines and apps into a single physical server, prioritizing their access to storage resources is crucial to ensuring each performs up to expectations.
In traditional IT, applications are tied to a physical server that accesses direct-attached storage. The advent of shared storage via storage area networks upended that one-to-one relationship, requiring assignment of a logical unit number to each physical server to ensure apps accessed the correct data on the shared device.
Today, multiple virtualized applications running on a single physical server complicate matters further. In this podcast, Dan Florea, director of product management at Tintri, a Mountain View, Calif., maker of virtual machine storage for virtualized applications, discusses the gap between apps and storage and explains the ramifications.
"The gap that we see is the fundamental disconnect between applications today, which are virtualized, and conventional storage, which, as designed, is completely blind to the fact that applications have become virtualized and are no longer running as a single entity on a physical server." In other words, virtualized applications need to be mated with virtualized storage.
Instead of a physical server running only one app, densities of up to 20 virtual machines per server, each running its own application, are not unusual, Florea said. The problem is that the apps running on those VMs compete simultaneously for the physical server's I/O pipeline to storage.
"You might have one app that is doing a lot of reads and writes to storage while another application that is very latency (delay) sensitive waits its turn," Florea said. If that latency-sensitive application is doing voice or video, even the tiniest of delays will likely be unacceptable.
Prioritizing I/O among applications to minimize latency means assigning quality-of-service (QoS) policies. Doing that correctly requires a deep understanding of each application's I/O behavior, Florea said.
"Once you have that information, you can do all sorts of intelligent things to apply QoS policies, to apply fairness."
Avoid the I/O 'blender effect'
Fail to map VMs and storage to each other properly, and the dreaded I/O blender effect is a good bet: simultaneous I/O requests from applications on multiple VMs get processed in a random, unpredictable order, degrading performance.
"The I/O blender effect happens when you have multiple applications contending for the same storage," Florea said. "It's also known as the 'noisy neighbor effect' where you have one neighboring application that is being chatty in terms of I/O and taking up all of the storage resources."
A typical example might be a database application and a virtual desktop application running in parallel. The database app will likely dominate storage I/O even though the desktop app might be highly latency sensitive.
"You could be in a situation where your database is completely clogging up your storage system [resulting in] users of the virtual desktop app getting a terrible experience," Florea said.
In the remainder of the podcast, Florea discusses Tintri's approach to virtualized storage and the positive impact of flash and solid-state storage technology compared to traditional spinning disks.