Cloud managers work within a distributed WAN computing infrastructure; one of the biggest shifts from the traditional data center is that all data is stored, managed and administered in a private cloud. Effective cloud-based workload monitoring can catch performance issues before they affect users. Knowing how your cloud is behaving allows you to deliver a more powerful cloud computing experience.
Gathering cloud performance metrics
IT admins must actively gather and log cloud-facing server performance metrics and data, especially since most servers that host cloud workloads are virtual machines (VMs) that require dedicated resources. Over-allocating or under-allocating resources to cloud servers can be a costly mistake.
Proper planning and workload management are necessary before any major cloud rollout. When gathering performance metrics for specific servers running dedicated workloads, admins must evaluate the following details:
- CPU usage: The cloud-facing server could be physical or virtual. Administrators must look at that machine and determine how users are consuming CPU resources. With numerous users launching desktops or applications from the cloud, admins must carefully consider how many dedicated cores the server requires.
- RAM requirements: Cloud-based workloads can be RAM-intensive. Monitoring a workload on a specific server allows you to gauge how much RAM to allocate. The key is to plan for fluctuations without over-allocating resources. By examining RAM use over a period of time, administrators can determine when usage spikes occur and set appropriate RAM levels.
- Storage needs: Sizing considerations are important when working with a cloud workload. User settings and the workload itself both require space. I/O should also be examined. For example, a boot storm or massive spike in use can cripple a SAN that's unprepared for such an event. By monitoring I/O and controller metrics, administrators can determine performance levels specific to storage systems. You can use solid-state disks (SSDs) or onboard flash cache to help absorb I/O spikes.
- Network design: Networking and its architecture play a very important role in a cloud infrastructure and its workload. Monitoring network throughput within the data center as well as in the cloud will help determine specific speed requirements. Uplinks from servers into the SAN through a fabric switch that provides 10 GbE connectivity can reduce bottlenecks and improve cloud workload performance.
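A minimal sketch of gathering a couple of the metrics above with Python's standard library is shown below. The `HostMetrics` structure, the `flag_pressure` helper and its thresholds (80% CPU load, 10% free disk) are illustrative assumptions, not part of any particular monitoring product:

```python
import os
import shutil
from dataclasses import dataclass


@dataclass
class HostMetrics:
    cpu_cores: int
    load_avg_1m: float    # 1-minute load average
    disk_total_gb: float
    disk_free_gb: float


def collect_metrics(path: str = "/") -> HostMetrics:
    """Collect basic host metrics available from the standard library (Unix)."""
    usage = shutil.disk_usage(path)
    load1, _, _ = os.getloadavg()
    return HostMetrics(
        cpu_cores=os.cpu_count() or 1,
        load_avg_1m=load1,
        disk_total_gb=usage.total / 1e9,
        disk_free_gb=usage.free / 1e9,
    )


def flag_pressure(m: HostMetrics, cpu_factor: float = 0.8,
                  min_free_ratio: float = 0.10) -> list:
    """Return warnings when a host approaches its resource limits."""
    warnings = []
    if m.load_avg_1m > cpu_factor * m.cpu_cores:
        warnings.append("cpu")
    if m.disk_free_gb / m.disk_total_gb < min_free_ratio:
        warnings.append("storage")
    return warnings
```

In practice these samples would feed a log or dashboard; the point is that each metric is compared against a planned allocation rather than inspected ad hoc.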
Performance monitoring tools are also useful. Citrix Systems Inc.’s EdgeSight for Endpoints gathers performance metrics at the server and the end-point level. By understanding how the cloud server is operating and knowing end-user requirements, administrators can size physical infrastructure properly to support virtual instances.
Advantages of workflow automation
Active cloud workload monitoring goes beyond gathering metrics and statistics. Many systems monitor workloads and provide workflow automation in the event of a usage spike.
Certain markets, like the travel industry, experience usage spikes during particular periods of the year. To prepare for this, workload thresholds are set so new VMs can be spun up as soon as demand increases. That way, end users retain access to data and normal workloads without performance degradation.
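The threshold logic behind that kind of scale-out can be sketched in a few lines. The session-per-VM capacity and 20% headroom figure here are assumptions for illustration; a real system would trigger the actual VM provisioning call:

```python
import math


def vms_to_add(current_vms: int, active_sessions: int,
               sessions_per_vm: int = 50, headroom: float = 0.2) -> int:
    """Compute how many VMs to spin up so capacity stays ahead of demand.

    Keeps `headroom` (20% by default) of spare capacity above current load,
    so a spike is absorbed while the next VMs boot.
    """
    needed = math.ceil(active_sessions * (1 + headroom) / sessions_per_vm)
    return max(0, needed - current_vms)
```

For example, with two VMs already running and 100 active sessions, the 20% headroom pushes the target to three VMs, so one more is provisioned.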
Workflow automation also helps with disaster recovery and backup. As data replication occurs between numerous sites, a remote location can spin up identical workloads if another site experiences data loss. Proper workload monitoring and data center design can help increase system stability and, more importantly, business continuity.
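The failover half of that automation amounts to picking a healthy replica with the freshest data. A hedged sketch, assuming each site reports a health flag and its replication lag (both field names are illustrative):

```python
from typing import Optional


def choose_failover_site(sites: dict) -> Optional[str]:
    """Pick the healthy replica site with the lowest replication lag.

    `sites` maps a site name to {"healthy": bool, "lag_seconds": float}.
    Returns None when no healthy replica exists.
    """
    candidates = [(info["lag_seconds"], name)
                  for name, info in sites.items() if info["healthy"]]
    if not candidates:
        return None
    # Lowest lag means the least data lost when workloads spin up remotely.
    return min(candidates)[1]
```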
Cloud monitoring tips
Here are a few rules to help maintain the health of your private cloud workloads:
Know your physical resources. Even though physical resources may seem endless initially, they have specific limits. Without properly monitoring and gauging these resources, they can be depleted very quickly. Cloud workloads can be demanding. Planning is a must.
Keep active logs. In addition to actively monitoring a cloud workload, cloud managers should log how this workload or server is performing over a period of time. Cloud servers can be upgraded and workloads can be migrated from one physical host to another. In these situations, knowing how well specific server sets operate compared to older server sets can help to calculate total cost of ownership and return on investment. In many situations, good performance logs can supply the statistical information needed to justify an increase in data center budgets.
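A rolling performance log of the kind described above can be sketched with the standard library alone; the `PerfLog` class and its one-day default window are illustrative assumptions:

```python
import time
from collections import deque


class PerfLog:
    """Keep a rolling window of timestamped samples and summarize them."""

    def __init__(self, maxlen: int = 1440):  # e.g. one day of minute samples
        self.samples = deque(maxlen=maxlen)

    def record(self, value: float, ts: float = None) -> None:
        """Append a sample, stamping it with the current time by default."""
        self.samples.append((ts if ts is not None else time.time(), value))

    def average(self) -> float:
        return sum(v for _, v in self.samples) / len(self.samples)

    def peak(self) -> float:
        return max(v for _, v in self.samples)
```

Averages and peaks recorded before and after a hardware refresh or migration are exactly the figures that feed TCO and ROI comparisons.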
Monitor end points. From the data center’s perspective, engineers are able to monitor and manage active workloads. It’s also very important to monitor workload activities at the end point. By knowing how the workload is being delivered and how well it is being received, IT teams can create a more positive computing experience.
As a user accesses a workload in the cloud, admins have insight into which type of connection they're using, how well data is traveling to the end point and whether any modifications should be made. In some instances, admins may want to apply data compression or bandwidth optimization techniques to enable the workload to function properly at the end point.
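That decision can be reduced to a simple rule set. The thresholds below (5 Mbps minimum throughput, 150 ms maximum latency) and the function name are assumptions for illustration only:

```python
def endpoint_adjustments(throughput_mbps: float, latency_ms: float,
                         min_mbps: float = 5.0,
                         max_latency_ms: float = 150.0) -> list:
    """Suggest delivery tweaks for a constrained end-point connection."""
    tweaks = []
    if throughput_mbps < min_mbps:
        tweaks.append("enable compression")
    if latency_ms > max_latency_ms:
        tweaks.append("apply bandwidth optimization")
    return tweaks
```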
Bill Kleyman, MBA, MISM, is an avid technologist with experience in network infrastructure management. His engineering work includes large virtualization deployments as well as business network design and implementation. Currently, he is a Virtualization Solutions Architect at MTM Technologies, a national IT consulting firm.