In the first two parts of this three-part series on Hyper-V Cloud, we’ve defined Microsoft’s new cloud portfolio
and examined its core technologies. This final tip will cover how Hyper-V Cloud quantifies virtual resources.
So far, it seems that Microsoft Hyper-V Cloud isn’t really about Hyper-V. In fact, in many ways Hyper-V Cloud is more about the management of the hypervisor than the hypervisor itself. It also focuses on the hardware infrastructure upon which Hyper-V resides.
There’s a third component to this Hyper-V Cloud notion that merits additional attention. It’s not yet well-defined, but it should be a frequent topic of discussion in the months to come. It’s what I’ll call “resource quantification” or “the economics of resources,” and it’s something I discuss in detail in Chapter 3 of my book Private Clouds: Selecting the Right Hardware for a Scalable Virtual Infrastructure.
Readers of the second tip in this series already know that a major component of Hyper-V Cloud is the elimination of the “do it yourself” approach to private cloud construction. By using pre-validated hardware configurations straight from your manufacturer, what arrives at your door is a known quantity of resources that you can apply to virtual machine (VM) workloads.
That “known quantity” is a central part of quantifying physical resources. It goes a bit like this: Let’s say you buy four blade enclosures from your favorite hardware vendor. Three enclosures are fully stocked with eight blades, with the fourth only containing two. You install these enclosures into your data center.
A private cloud quantifies the resources in these blades by saying, “Well, if each blade contains two 2.53 GHz processors and 32 GB of RAM, then across all 26 blades I have 131,560 MHz of available processing and 832 GB of RAM.” This calculation is simple arithmetic: add together the processing and memory capacity of each blade to arrive at a total quantity.
This method of determining processing and memory capacity follows the classic economics concept of “supply.” With 832 GB of RAM, you’ve got plenty of supply to assign to VMs. The other half of the economics calculation is, of course, “demand.” In a private cloud environment, demand pressure is exerted by concurrently running VMs. You can see this demand information in Virtual Machine Manager (VMM) by looking at the MHz of processing and the GB of RAM each VM individually requires. The same holds true for storage and networking.
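That supply-and-demand arithmetic can be sketched in a few lines of Python. The blade counts come from the example above; the per-VM demand figures are invented for illustration, standing in for what VMM would report:

```python
# Supply: 3 enclosures x 8 blades + 1 enclosure x 2 blades = 26 blades,
# each with two 2.53 GHz processors and 32 GB of RAM.
BLADES = 3 * 8 + 1 * 2
MHZ_PER_BLADE = 2 * 2530          # two 2.53 GHz processors, in MHz
GB_RAM_PER_BLADE = 32

supply_mhz = BLADES * MHZ_PER_BLADE       # total processing supply
supply_gb = BLADES * GB_RAM_PER_BLADE     # total memory supply

# Demand: per-VM reservations (hypothetical figures for illustration).
vms = [
    {"name": "web01", "mhz": 2000, "gb": 4},
    {"name": "sql01", "mhz": 5060, "gb": 16},
    {"name": "exch01", "mhz": 4000, "gb": 8},
]
demand_mhz = sum(vm["mhz"] for vm in vms)
demand_gb = sum(vm["gb"] for vm in vms)

print(f"CPU headroom: {supply_mhz - demand_mhz} MHz")
print(f"RAM headroom: {supply_gb - demand_gb} GB")
```

Supply minus demand is your remaining headroom, which is exactly the number a private cloud needs before it can safely place the next VM.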
How will Hyper-V Cloud differ?
The potential for brilliance in Hyper-V Cloud depends on how Microsoft ultimately integrates its hardware vendor partners with the management and monitoring exposure inside VMM. With a management umbrella that can peer into every layer of the virtualization stack -- reporting to administrators on immediate supply and demand levels as well as long-term trends -- Microsoft can deliver what a private cloud aspires to be.
But we’re not there yet. First and foremost, the current version of Microsoft’s VMM simply isn’t designed to do this. Some of this data can be gathered with a little effort, but the core of VMM isn’t yet built around this evolving private cloud mindset.
We are getting there, however, with the vendors who supply the hardware. Some of them are taking matters into their own hands by creating private cloud management platforms. Because such a platform is built by the hardware vendor, it automatically comes with all the necessary hardware integrations to see resources as they’re supplied and consumed. That same platform can then integrate with VMM to enact changes, such as powering VMs on and off and handling the other administrative activities of the day.
Turning resources into dollar signs
Performance and capacity management are only the beginning. This supply and demand of resources also enables business process integration once it is further quantified into dollars and cents. Think about it: If Microsoft, in cooperation with your hardware vendor, can quantify virtual environment resources into numeric values that you assign to VMs as needed, then the next logical step is to attach dollar values to those quantities. All of a sudden, an additional 100 MHz of processing or 2 GB of RAM comes with a real and assignable cost.
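The conversion is just multiplication once you settle on per-unit rates. The rates below are hypothetical figures for illustration; real ones would come from amortizing your own hardware and operating costs:

```python
# Hypothetical chargeback rates, in dollars per month.
DOLLARS_PER_MHZ_MONTH = 0.002
DOLLARS_PER_GB_MONTH = 5.00

def monthly_cost(mhz: int, gb: int) -> float:
    """Dollar cost per month of a given CPU and RAM allocation."""
    return mhz * DOLLARS_PER_MHZ_MONTH + gb * DOLLARS_PER_GB_MONTH

# The example above: an additional 100 MHz and 2 GB of RAM.
print(f"${monthly_cost(100, 2):.2f}/month")  # -> $10.20/month
```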
Now, go one step further and pair those costs with templates for server provisioning. Very quickly, you can begin to see how a short-fuse “we need another Exchange server” request can be immediately mapped to a business cost. Those same requests can be trended over the long term to tell you when you’re going to need more resources before you run out. That’s planning you can apply directly to your annual budgetary allocation.
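Here’s a sketch of that template-to-budget step. The template footprints, chargeback rates and request counts are all hypothetical values for illustration:

```python
# Hypothetical chargeback rates, in dollars per month.
DOLLARS_PER_MHZ_MONTH = 0.002
DOLLARS_PER_GB_MONTH = 5.00

# Hypothetical resource footprint of each provisioning template.
TEMPLATES = {
    "exchange": {"mhz": 5060, "gb": 16},
    "web": {"mhz": 2000, "gb": 4},
}

def template_cost(name: str) -> float:
    """Monthly dollar cost of provisioning one server from a template."""
    t = TEMPLATES[name]
    return t["mhz"] * DOLLARS_PER_MHZ_MONTH + t["gb"] * DOLLARS_PER_GB_MONTH

# Trend: average new-server requests per month, projected to a year.
requests_per_month = {"exchange": 2, "web": 5}
monthly_spend = sum(template_cost(n) * c for n, c in requests_per_month.items())
print(f"New-server run rate: ${monthly_spend:.2f}/month, "
      f"${monthly_spend * 12:.2f}/year")
```

The per-request number answers the short-fuse question; the projected annual figure is what feeds your budgetary allocation.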
In the end, is Hyper-V Cloud a product that you can see, feel and touch? Not so much. But it does represent a new way of thinking about virtual resources that’s an obvious evolution from the nasty white-boxing ways of our immediate past. And, if you’re one of the many shops that invested early in virtualization only to find you’re getting less out of it than you’d originally hoped, you (and your business leaders) will be happy to know that resource quantification, valuation and supply/demand are all fundamental to evolving a simple virtualization deployment into true private cloud computing.
ABOUT THE AUTHOR:
Greg Shields, Microsoft MVP, is a partner at Concentrated Technology. Get more of Greg's Jack-of-all-Trades tips and tricks at www.ConcentratedTech.com.