Capacity planning: From buckets to rivers
When dealing with the constrained set of resources found in a traditional, in-house data center, capacity management can be viewed metaphorically as making sure you have a big enough bucket to hold all the water the business needs to slake its thirst. This pool of resources has hard boundaries because the IT department must own each computational resource: the hardware, the software, and the people who deliver and manage them. Capacity management under this "bucket" approach tends to center on predicting peak usage and ensuring that the in-house data center can meet those demands.
Years ago, grid or utility computing promised an alternative to this physical limit: instead of a bucket, IT resources would be like a fast-flowing river from which limitless cycles could be drawn. For various reasons -- such as low bandwidth to remote grids, the high cost of entry, and the batch-job nature of most grid offerings -- grid and utility computing never gained broad adoption.
Slipping in under the moniker "cloud computing," some of the aspirational benefits of utility computing are once again making the rounds in IT departments. The same challenges that muddled utility computing efforts must still be addressed, but better networks and a bias toward application-based rather than job-based computing should drive greater uptake this time.
At first, cloud computing aims to run your baseline of IT services "in the cloud," with the goal of paying less overall than if you owned your own data center. Beyond that, however, the aspiration of cloud computing is to provide IT departments with unlimited computing power on demand. How will IT departments properly manage this river of IT?
The answer lies in finding where the largest constraint is in this new model of IT delivery. Rather than upper limits on raw computing power, the constraint in cloud computing is the speed at which new services can be provisioned and put into production. Scaling up an IT system into the cloud will require time to initiate new systems, transfer data and applications, connect to existing services, test the combined system, and manage the full life-cycle of the larger pool of IT resources.
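The scaling steps above can be sketched as a simple model of time-to-production. Everything here -- the step names, the durations, the helper function -- is an illustrative assumption, not a real provisioning API; the point is only that the elapsed hours come from integration work, not from acquiring raw compute.

```python
# Hypothetical sketch: the steps that gate how fast a scaled-up cloud
# service reaches production. All step names and durations (in hours)
# are illustrative assumptions.
PROVISIONING_STEPS = [
    ("initiate new systems", 0.5),
    ("transfer data and applications", 4.0),
    ("connect to existing services", 1.0),
    ("test the combined system", 8.0),
    ("hand off to life-cycle management", 0.5),
]

def time_to_production(steps):
    """Total elapsed hours before the larger system is live.

    Raw compute is available almost instantly in the cloud; the
    integration and testing steps below are the real constraint.
    """
    return sum(hours for _, hours in steps)

print(f"Hours to production: {time_to_production(PROVISIONING_STEPS)}")
```

Under these assumed numbers, spinning up the compute itself is the smallest slice of the total; shrinking the other steps is where capacity planning effort pays off.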
Here, the two areas of concern for capacity management are the following:
- Ensuring that new IT services can be spun up at an appropriate speed
- Ensuring that the proper workflow is put into place to manage those assets
Workflow management matters because many of the benefits of cloud computing come from the speed and ease with which IT resources can be created and put into production. A heavy workflow built on traditional waterfall approaches -- planning everything ahead of time down to the smallest detail -- will slow the use of cloud computing. Worse, a process-heavy workflow can delay the delivery of IT services to the business, delivering solutions to yesterday's problems today.
Grounding cloud computing
While the underlying constraints on capacity management change when using cloud computing, the traditional cycle of modeling, provisioning, monitoring, maintaining, and modifying remains. Evaluating performance, cost, and the business's ability to profit remains essential as well. However, more emphasis falls on the last two steps of the cycle: maintaining the use of cloud-based resources and modifying their use over time.
The so-called "elastic" nature of cloud computing implies that cloud-based IT assets will be deprovisioned much more frequently than traditional assets. The idea of deprovisioning, or getting rid of, an IT asset seems ludicrous to most IT departments: IT assets never die, they just blink alone in dark corners. Yet the economic benefits of cloud computing rely on IT knowing when to stop using cloud-based assets. Running a cloud-based asset 24/7 often costs much more than running a similar on-premise asset. The mindset of IT departments must shift from owning and running as much raw computational power as possible to owning and running as little as possible.
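The cost argument above can be made concrete with some back-of-the-envelope arithmetic. All of the prices and utilization figures below are hypothetical assumptions chosen only to show the shape of the comparison, not real vendor rates.

```python
# Illustrative arithmetic (all prices and hours are assumptions):
# compare a cloud instance left running 24/7, the same instance used
# elastically, and an amortized on-premise server.
HOURS_PER_MONTH = 730

cloud_rate = 0.40        # $/hour for an on-demand instance (assumed)
onprem_monthly = 180.00  # amortized hardware + power + admin (assumed)

always_on = cloud_rate * HOURS_PER_MONTH  # never deprovisioned
elastic = cloud_rate * 10 * 22            # 10 h/day, 22 business days

print(f"cloud, 24/7:    ${always_on:.2f}/month")
print(f"cloud, elastic: ${elastic:.2f}/month")
print(f"on-premise:     ${onprem_monthly:.2f}/month")
```

With these assumed numbers the always-on cloud instance is the most expensive option of the three, while the elastically used instance is the cheapest -- which is exactly why deprovisioning discipline, not raw capacity, determines whether the cloud saves money.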
The responsibility here rests not only with IT departments but also with the developers of those IT services, who must create applications whose capacity can be managed in an elastic fashion. Changing both IT departments and vendors often seems impossible, but the end result is incredibly desirable, if not outright required: lower costs and the ability to more quickly give the business what it needs to make money.
ABOUT THE AUTHOR: Michael Coté is an analyst at RedMonk, covering primarily enterprise software and specializing in open source, IT management, software development, the Web, and social/collaborative software. He is RedMonk's IT Management Lead. His blog is available at PeopleOverProcess.com, and he produces the RedMonk podcast and the video podcast, RedMonkTV. This was first published in April 2009.