

Building a fluid cloud architecture

Fluid cloud promises to enable greater cloud agility and a variety of use cases in enterprise applications and telecom services.

Cloud applications today are a direct extension of modern data center architectures. A new style of architecture built on tiny virtual machines (VMs) designed for mobility will enable smaller applications with lower latency. Researchers envision provisioning these tiny VMs for services like print servers, Dynamic Host Configuration Protocol (DHCP), VoIP and other use cases. Fluid cloud architecture requires new thinking about shrinking VMs, just-in-time provisioning and the movement of VMs across different physical servers.

Felipe Manco, a senior researcher at NEC, described early work on the super-fluid cloud at the USENIX HotCloud workshop in Santa Clara, Calif. "Traditional cloud infrastructure is restricted to data centers and core networks. Over the last several years, operators have been deploying processing power in different parts of the network. The basic idea is to make it possible to deploy cloud services on top of smaller servers staged closer to customer premises or inside enterprises," said Manco.

Requirements for a fluid cloud

This new architecture has four essential properties: location independence, time independence, scale independence and hardware independence. Location independence implies the ability to deploy servers across various networking designs based on application needs.

Mike Schlansker, a principal architect in HP Labs' Internet and Computing Platforms Research Center, said this new architecture gives application developers, telcos and customers the opportunity to decide where to place applications. For example, staging an application near the customer could eliminate the latency of a round trip to a data center halfway across the country. This would make it feasible to provision latency-sensitive services like VoIP calling.

Time independence requires the ability to spin up virtual machines for short periods ranging from a few milliseconds to a few hours. Scale independence requires building the management tools to turn up hundreds or thousands of VMs on a single server. Hardware independence necessitates the ability to run cloud applications on smaller servers at the edge of the network.

Consider new cloud applications

Manco believes that this new style of architecture would be ideal for making a variety of existing applications more efficient and for enabling new applications that are not practical today. Virtual CDNs could be provisioned on demand, and their cache nodes could be populated and deprovisioned dynamically.

These micro cloud services would also be suitable for providing back-end services for customer premises equipment to improve maintenance, reduce costs and roll out new services faster. Running micro cloud services on small VMs could help isolate applications from each other to address security and performance needs. Another use case is provisioning personalized edge services, such as parental controls for TV services, cloud storage, notification aggregation and ad removal.

Shrink the VM with unikernels

A key requirement for this new style of architecture is that each virtual machine should have a very small footprint in terms of memory and processing requirements. The services should also be tailor-made for the platform to allow thousands of VMs to be provisioned on demand and deprovisioned as required.

Manco said a best practice for this style of architecture is the use of unikernel VMs, which have much lower overhead than traditional VMs. NEC has created an initial platform based on the Xen virtual machine infrastructure. In the future, it may be possible to apply similar principles to containers, but Manco believes that unikernels promise better isolation and security than existing containers.
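To give a sense of how small such a guest can be, a unikernel VM under Xen's xl toolstack needs only a few lines of configuration. This is a rough, hypothetical sketch -- the image path, guest name and memory figure are illustrative, not taken from NEC's platform:

```
# Hypothetical xl guest config for a tiny unikernel VM (names illustrative)
name   = "dns-tiny-1"
kernel = "/srv/unikernels/mini-dns.bin"  # single-purpose unikernel image
memory = 8                               # MB; unikernels can boot in single-digit megabytes
vcpus  = 1
vif    = ['bridge=xenbr0']
```

Under stock Xen, `xl create` would boot such a guest and `xl destroy` would deprovision it; NEC's stripped-down toolstack aims to make exactly these operations fast enough to repeat thousands of times.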

Considerable work is required to improve the performance characteristics of Xen, which was not designed to support thousands of VMs running for short periods. NEC created a new implementation of the Xen tool stack that strips functionality to improve performance. Other work went into improving the Xen console so that new VMs can be provisioned more dynamically.

Pushing the limits of small VMs

Using this modified Xen stack, NEC demonstrated the ability to spin up 10,000 guest VMs on demand. One of the challenges with spinning up a large number of VMs is that the incremental time to add new VMs can grow quickly. The team found that adding the ten-thousandth VM took 135 milliseconds on the new architecture, compared with 3.5 seconds for LXC containers.

Manco admitted that it's probably not practical to maintain such a large number of virtual machines at one time. For starters, scheduling issues in Xen can slow down the performance of each VM. The problem is that the traditional Xen scheduler uses a round-robin method to collect input from each VM. A good practice is to spin up VMs as required rather than keep a large number running in memory.
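The spin-up-on-demand practice can be sketched in a few lines. This is a hypothetical illustration, not NEC's code: the `spawn` and `destroy` callables stand in for real toolstack operations (such as invoking xl), and the class and timeout are invented for the example:

```python
import time

class JitProvisioner:
    """Just-in-time VM pool sketch: boot on demand, reap idle guests.

    spawn/destroy are injected callables standing in for real toolstack
    calls; this models the policy, not the hypervisor plumbing."""

    def __init__(self, spawn, destroy, idle_timeout=5.0):
        self.spawn = spawn
        self.destroy = destroy
        self.idle_timeout = idle_timeout
        self.running = {}  # service name -> last-used timestamp

    def handle_request(self, service):
        # Boot the tiny VM only when a request actually arrives;
        # an already-running guest is simply reused.
        if service not in self.running:
            self.spawn(service)
        self.running[service] = time.monotonic()

    def reap_idle(self):
        # Tear down guests that have sat unused past the timeout,
        # instead of keeping thousands resident in memory.
        now = time.monotonic()
        for service, last in list(self.running.items()):
            if now - last > self.idle_timeout:
                self.destroy(service)
                del self.running[service]
```

The 135 ms boot time reported above is what makes this policy viable: if a guest can start faster than a user notices, there is little reason to keep it running between requests.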

VM placement is important

Another good practice is to look at the trade-offs between scaling up the number of small VMs on a single server and scaling them out across multiple servers. Applications doing considerable processing, like a print server, need to be scaled out across multiple servers. On the other hand, applications doing simpler tasks, like DNS lookups, could be scaled up on a single smaller server.
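A back-of-the-envelope version of that trade-off can be written down directly. The function and its "millicore" units are hypothetical, chosen only to show how per-instance load drives the scale-up versus scale-out decision:

```python
def place_instances(per_instance_mcpu, count, server_capacity_mcpu):
    """Rough scale-up vs. scale-out heuristic (illustrative, not a scheduler).

    Returns (servers_needed, instances_per_server). Heavy services spill
    across many servers; light ones pack onto one small edge box."""
    fits_per_server = max(1, server_capacity_mcpu // per_instance_mcpu)
    servers_needed = -(-count // fits_per_server)  # ceiling division
    return servers_needed, fits_per_server

# 1,000 light DNS-style VMs at 1 millicore each pack onto one 1,000-millicore server:
print(place_instances(1, 1000, 1000))   # (1, 1000)
# 100 busy print-server-style VMs at 250 millicores each spill across 25 servers:
print(place_instances(250, 100, 1000))  # (25, 4)
```

The point of the sketch is the asymmetry: for lightweight services, the limit is how many guests the toolstack can manage on one box, while for heavy services, the limit is CPU and the placement problem becomes a multi-server one.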

Irfan Ahmad, CTO of Cloud Physics, a cloud analytics service, said, "Developing an architecture for quickly moving VMs around is important to provide seamless experiences for many classes of service." When he was at VMware, his team ran into challenges when VoIP phone calls were interrupted by VM movement.

Manco said that more work needs to be done to make practical use of this more agile cloud infrastructure. In the long run, it promises to reduce the cost of networking equipment, cable TV boxes and enterprise gear. It could also bring the same agility to Internet of Things applications by moving more functionality and control to edge servers. By researching this area today, enterprises will be in a better position to build more fluid cloud apps tomorrow.
