Virtualization to cloud: Planning, executing a private cloud migration
A comprehensive collection of articles, videos and more, hand-picked by our editors
In many ways, managing a private cloud is no different from managing an on-premises data center. IT admins still must take important steps to monitor and balance the infrastructure. But the success of a cloud environment depends on several components: security, server density, network planning and workload management.
Before placing any workload on a cloud-ready server, administrators must plan their physical server environment. During this planning phase, cloud managers can size the environment, know what workloads they are delivering and truly understand available resources.
Distributed computing allows users to log in from any device, anywhere, at any time. This means an organization’s cloud environment must be able to handle user fluctuations -- particularly for international companies, whose users log in from various time zones. Without good server load balancing, a cloud environment can experience degraded performance as cloud servers take on more workloads than they can handle.
Administrators must take time to evaluate which workloads are being deployed into the cloud, because each will affect the cloud-based server differently. For example, an organization deploying a virtual desktop environment must know the image size and how many users can safely reside on one physical server. Load balancing determines sizing and ensures hardware is properly configured at the server level. If a server becomes overloaded, a resource lock can occur, degrading performance and hurting the end-user experience.
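The sizing arithmetic described above can be sketched in a few lines. This is a hypothetical illustration only -- the host capacities, per-desktop footprints and headroom figure below are assumptions, not vendor guidance -- but it shows how the scarcer resource (RAM or vCPU) sets the safe user count for one physical server.

```python
def users_per_host(host_ram_gb, host_vcpus, desktop_ram_gb, vcpus_per_desktop,
                   headroom=0.2):
    """Return a safe user count for one host, reserving headroom for spikes.

    All parameters are illustrative assumptions; real sizing should come
    from the virtualization vendor's guidance and load testing.
    """
    usable_ram = host_ram_gb * (1 - headroom)     # keep 20% free for bursts
    usable_vcpus = host_vcpus * (1 - headroom)
    by_ram = int(usable_ram // desktop_ram_gb)    # users RAM can support
    by_cpu = int(usable_vcpus // vcpus_per_desktop)  # users CPU can support
    return min(by_ram, by_cpu)                    # scarcer resource sets the cap

# Example: a 256 GB, 64-vCPU host running 4 GB / 2-vCPU desktop images
print(users_per_host(256, 64, 4, 2))  # CPU is the limiting resource here
```

In this example the host could fit 51 desktops by RAM but only 25 by vCPU, so 25 becomes the cap -- the kind of number an admin would feed into the logon threshold discussed later.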
Visibility into the cloud
A company with multiple cloud locations must have visibility into remote data centers to avoid complications and maintain server health. By monitoring what’s running on cloud servers and setting up alerts when issues arise, IT admins are able to take proactive measures to load-balance the entire environment.
Deploying endpoint monitoring tools can help with visibility. If a server’s resources are being consumed at a dangerously high rate, an engineer needs to know so they can resolve the issue quickly. Constant visibility -- monitoring who is accessing cloud machines and how dense the user count is -- can help alleviate load-balancing issues.
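A minimal sketch of that alerting logic, assuming utilization metrics are already being collected by an endpoint-monitoring agent (the threshold, server names and metric values below are all hypothetical):

```python
ALERT_THRESHOLD = 0.85  # assumed policy: alert at 85% utilization

def check_servers(metrics):
    """Flag servers whose resource use crosses the alert threshold.

    metrics: {server_name: {'cpu': 0-1, 'ram': 0-1}} -> list of alert strings.
    In a real deployment these numbers would come from a monitoring agent.
    """
    alerts = []
    for server, usage in metrics.items():
        for resource, level in usage.items():
            if level >= ALERT_THRESHOLD:
                alerts.append(f"{server}: {resource} at {level:.0%}")
    return alerts

sample = {
    "cloud-01": {"cpu": 0.92, "ram": 0.60},
    "cloud-02": {"cpu": 0.40, "ram": 0.55},
}
print(check_servers(sample))  # only cloud-01 trips the CPU alert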
Having visibility into your cloud presence can help you understand how resources are being used. The results can determine how to properly allocate users, or reveal whether the environment needs additional servers to support its workloads.
Load-balancing tactics within a private cloud
One misconception among data center and cloud managers is that load balancing is primarily a server-based function. The reality is that admins must monitor and load balance multiple devices within a cloud environment. Server load balancing is not a difficult process -- as long as it’s done proactively.
Servers. Physical resources on a server are finite. Without proper monitoring and load balancing, an entire system can become overloaded by workloads and users. When working with data centers in the cloud, it’s important to look at the physical hosts and virtual servers running on them.
If a company is running a private cloud and pushing out applications using Citrix’s XenApp, for example, it must know how many apps are installed on each server and how many users that server can safely support. By sizing the machine based on this information, administrators can set a cap on user count and disable additional logons once the threshold is met. Any new users will log in to a different server that has been made available for load balancing purposes.
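The capped-logon behavior described above can be sketched as follows. The server names and cap are hypothetical stand-ins; a real XenApp farm enforces this through its own load evaluator policies rather than custom code.

```python
def assign_logon(servers, cap):
    """Place a new logon on the first server under the user cap.

    servers: {server_name: current_user_count}, checked in order.
    Returns the chosen server name, or None if every server is full
    (the logon is refused rather than overloading a host).
    """
    for name, users in servers.items():
        if users < cap:
            servers[name] = users + 1  # record the new session
            return name
    return None

# xenapp-01 has hit its 50-user cap, so the new user lands on xenapp-02
farm = {"xenapp-01": 50, "xenapp-02": 37}
print(assign_logon(farm, cap=50))
```

The cap itself would come from the sizing exercise done during planning -- the same number that says how many users one physical server can safely host.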
Access gateways. If an access gateway breaks down, so will the ability to launch cloud workloads. Global Server Load Balancing (GSLB) is one feature available on Citrix’s NetScaler appliance that can help administrators create a robust and redundant environment. If one location goes down, GSLB detects the connection loss and immediately load balances to the next available appliance, allowing continuous access into an environment -- even if a device has failed.
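The GSLB failover idea reduces to a priority-ordered health check: probe each site and route to the first healthy one. The sketch below stubs the health checks as a dict -- a real GSLB appliance such as NetScaler probes the sites itself -- and the gateway hostnames are invented for illustration.

```python
def pick_site(sites, healthy):
    """Return the first healthy site in priority order.

    sites: ordered list of appliance addresses (primary first).
    healthy: {address: bool}, normally populated by live health probes.
    Returns None only if every location is down.
    """
    for site in sites:
        if healthy.get(site, False):
            return site
    return None

sites = ["gw-east.example.com", "gw-west.example.com"]
status = {"gw-east.example.com": False, "gw-west.example.com": True}
print(pick_site(sites, status))  # east is down, traffic fails over to west
```

Because the failover decision is automatic, users keep launching cloud workloads even while the primary appliance is offline -- exactly the continuity the paragraph above describes.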
Security devices. Each security device accepts only a certain number of connections, so having a backup device in case of failure is important. Properly sizing a security appliance will depend on the cloud environment and the number of users accessing it. The ability to authenticate users across the WAN is important to maintain uptime and environment stability.
Network infrastructure. Cloud traffic bottlenecks caused by a poorly designed switching infrastructure cost a company money in degraded performance and in staff hours spent troubleshooting and fixing the issue. Network admins should start with a good core switch and have a secondary switch available. By monitoring the amount of traffic passing through the network, admins will know whether the environment is properly sized or needs more hardware.
Bill Kleyman, MBA, MISM, is an avid technologist with experience in network infrastructure management. His engineering work includes large virtualization deployments as well as business network design and implementation. Currently, he is a Virtualization Solutions Architect at MTM Technologies, a national IT consulting firm.