
Containers could breathe new life into old apps

Containers are viewed as the simplest path for moving traditional data center apps to public clouds, but IT admins should prepare for the Donkey Kong barrels that are sure to trip them up along the way.

NEW YORK -- Containers could be the bridge that connects traditional enterprise data centers to the public cloud, if IT professionals can make networking and other infrastructure challenges fall into place.

Containers were a major focus of a panel discussion on hybrid cloud services and split application architecture at the Open Networking User Group at New York University last week. A split application architecture promises scale-out microservices in public clouds, but connects back to the private data center for scale-up services, such as databases.
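
In concrete terms, that split often looks like a stateless web tier running as many identical container replicas in the public cloud, while every durable read or write goes back over the hybrid network link to a database that stays put. The following minimal sketch, in Python using only the standard library, assumes a hypothetical on-premises database hostname reachable over that link; the names and ports are illustrative, not something described by the panel.

# Minimal sketch of a split application architecture (hypothetical names).
# The stateless web tier scales out in the public cloud; anything that needs
# durable state reaches back across the hybrid network to a scale-up
# database that remains in the private data center.
import os
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Resolvable only over the VPN / virtual private cloud link back to the
# data center -- this hostname is an assumption for illustration.
ONPREM_DB_HOST = os.environ.get("ONPREM_DB_HOST", "db.corp.example.internal")
ONPREM_DB_PORT = int(os.environ.get("ONPREM_DB_PORT", "5432"))

class WebTierHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # No local state: any replica of this container can answer, which is
        # what lets the web tier scale horizontally in the public cloud.
        payload = {
            "status": "ok",
            "stateful_backend": f"{ONPREM_DB_HOST}:{ONPREM_DB_PORT}",
            "note": "queries would travel over the hybrid network link",
        }
        body = json.dumps(payload).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Each container instance is identical and disposable.
    HTTPServer(("0.0.0.0", 8080), WebTierHandler).serve_forever()

Because the handler keeps no local state, any replica can serve any request, which is what makes the public cloud half of the architecture scale out.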

Networking remains a major hurdle for this type of hybrid IT, but some of the biggest voices in the container space said this emerging technology, built for next-generation applications, could also ease the transition for shops still heavily reliant on more traditional workloads that will never run properly in the public cloud.

Many traditional IT shops have thousands of applications that are virtualized, but they aren't built for dynamic, agile cloud environments that focus more on horizontal scaling, said Greg Lavender, CTO of cloud architecture and infrastructure engineering at Citigroup Inc.

"Until they build their applications to be able to handle multiple faults without taking down the whole application, they're not going to really run them well in the cloud," Lavender said. "They're going to be stuck in this traditional vertical scaling or single scale architectures."

And because the industry is in an in-between phase where many older applications can't scale horizontally, Docker and other containers are a possible bridge between the two models, said Tim Hockin, senior software engineer at Google and technical lead for Kubernetes.

Systems such as Kubernetes, Docker and Mesos go to great lengths to support things users maybe shouldn't be doing, but will do anyway. Users can treat a container like a VM at first and eventually peel back some of the layers to run workloads more like microservices, Hockin said.

"They're going to do it whether we want them to or not, so let's at least let them do it in a way where that means they don't hurt themselves," Hockin said.

As the application reaches into the infrastructure, there is a need for some level of abstraction to hold it all together, said Martin Casado, a fellow at VMware Inc. The emergence of containers is fortuitous for the IT industry because it provides a certain degree of familiarity.

"They are similar enough to things we know, like virtual machines, that we can just go enter it into the environment without doing massive disruption to the organization and massive disruption to the tools," Casado said.

At Citigroup, the transition started with staff willingness to move the web-tier applications to containers and microservices, Lavender said. The vertical, stateful workloads will have to remain on the private cloud, and extending that virtual network with a virtual private cloud could bridge the two, but it will be critical to get the container orchestration pieces correct.

"I see containers as the bridge into the hybrid cloud," Lavender said.

Networking a major hurdle

Traditional enterprises still limit their use of public cloud, in part because of concerns about security and quality control in extending the network beyond their own virtualized environment. One of the main issues from a networking perspective is the lack of guarantees around I/O in the public cloud. Given that variability, the question is whether such guarantees become a necessary part of cloud architectures, said Dave Ward, CTO of engineering and chief architect at Cisco Systems.

"End to end, all the different pieces need to line up to get your money's worth out of bursting or having this hybrid cloud situation," Ward said.

There is no shortage of challenges in getting older applications running in the public cloud, especially when they weren't designed to operate there, said Kenneth Duda, founder, CTO and senior vice president of software engineering at Arista Networks Inc. The best bet is to start with a new and simple application without all the dependencies of your own data center.

"Walk before you try to run and then the operational experience you gain there will inform the extent that you need guarantees, [service level agreements] and things you can verify," Duda said.

In a hybrid scenario, enterprises have policies that prevent everything from running in the public cloud, and service level agreements have to be met. The next challenge for container scheduling tools will be getting more exposure to information from the infrastructure to address network latency, uptime, fault domains and other issues that keep an application from running optimally, said Tobias Knaup, co-founder and CTO of Mesosphere.
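
Some of that exposure already exists in Kubernetes through topology labels on nodes, which describe zones and other fault domains. The sketch below, using the Kubernetes Python client, pins a latency-sensitive pod to the zone assumed to sit closest to the private data center link; the label value, image and names are illustrative assumptions, not Mesosphere's approach.

# Sketch: giving the scheduler infrastructure hints, in Kubernetes terms.
# Node labels such as topology.kubernetes.io/zone expose fault domains, and a
# node affinity rule keeps a latency-sensitive pod in the zone closest to the
# private data center link. Zone name and image are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="orders-frontend"),
    spec=client.V1PodSpec(
        containers=[client.V1Container(name="frontend", image="example/frontend:1.0")],
        affinity=client.V1Affinity(
            node_affinity=client.V1NodeAffinity(
                required_during_scheduling_ignored_during_execution=client.V1NodeSelector(
                    node_selector_terms=[
                        client.V1NodeSelectorTerm(
                            match_expressions=[
                                client.V1NodeSelectorRequirement(
                                    key="topology.kubernetes.io/zone",
                                    operator="In",
                                    values=["us-east-1a"],  # zone nearest the on-prem link (assumed)
                                )
                            ]
                        )
                    ]
                )
            )
        ),
    ),
)

v1.create_namespaced_pod(namespace="default", body=pod)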

Trevor Jones is a news writer with TechTarget's data center and virtualization media group. Contact him at tjones@techtarget.com.

Next Steps

Where containers fit into the cloud equation

Is container technology right for my organization?

Deploying containers and microservices to support DevOps
