
Trends in cloud computing: Getting nimble in the cloud

Cloud One on One with Reza Malekzadeh, VP of marketing at Nimbula Inc.


Generally available since April 2011, Nimbula Director gives enterprises tools to build private clouds in their own data centers and gives service providers tools to build public cloud services. The technology provides customers with an Infrastructure as a Service offering modeled on Amazon Elastic Compute Cloud (EC2), which makes sense given that Nimbula's founders hail from the EC2 team.

Reza Malekzadeh—the former marketing director at VMware Inc. and now the VP of marketing at Nimbula Inc.—discussed how the technology works and where it stands out in an increasingly crowded marketplace.

What is Nimbula Director?

Think of it as Amazon EC2 behind a firewall: that is, a private cloud running on your own infrastructure. Within an organization, users can access a private cloud infrastructure and create self-service, self-provisioned virtual machines. In addition to using their own infrastructure, users can run workloads on external clouds.

Intuit runs its TurboTax software on private infrastructure; that software is central to its business. But the company has specialized needs as well. Once a year, it relies on Amazon for testing; it's a periodic need. That is the kind of use case we envision for Nimbula Director.

How does Nimbula Director fit into the marketplace?

Part of our vision is that this is not a black-and-white world. You're going to have coexistence, where people will choose the best platform for a given app. They will keep their hardcore, monolithic IT systems that require fault tolerance on premises. Those applications will continue to run the way they run today: managed by IT and in-house. But new apps built on scale-out architectures, which are more tolerant of failure, are better suited to a public or a private cloud. We have customers, for example, that use their infrastructure for scientific computing, which requires a lot of data crunching. When they need extra capacity for a week, they use public cloud services instead of having to buy additional hardware for that short period.

By contrast, their Oracle database or Exchange Server runs internally on traditional systems and architecture, while they use private and public cloud architectures for new Web 2.0 applications or data-crunching workloads that run at peak times during the week.

How does Nimbula Director differ from other cloud technologies?

A lot of systems are evolutions of previously existing technologies and provide layers of automation and orchestration on top of existing stacks. But with these technologies, you carry forward a lot of the decisions previously made with that architecture.

Nimbula Director was built from scratch and doesn’t have any baggage to carry forward.

If you want to add capacity, our system detects the new hardware automatically. When you physically plug in a new server, it performs a PXE (network) boot. We detect the boot and install the software.
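The detection flow described above can be sketched roughly as follows. This is a hypothetical illustration, not Nimbula's actual code: a controller watches PXE boot requests on the management network, and any MAC address it hasn't seen before is handed the installer image.

```python
# Hypothetical sketch of PXE-based auto-provisioning: a new server network-boots,
# the controller notices the unknown MAC address, and queues a software install.

class ProvisioningController:
    def __init__(self):
        self.known_nodes = set()   # MACs already running the cloud software
        self.install_queue = []    # nodes waiting for an install

    def on_pxe_boot(self, mac_address: str) -> str:
        """Called when a PXE boot request is seen on the management network."""
        if mac_address in self.known_nodes:
            return "boot-local"    # existing node: boot from its own disk
        # New hardware detected: hand it the installer image and track it.
        self.install_queue.append(mac_address)
        self.known_nodes.add(mac_address)
        return "boot-installer"

controller = ProvisioningController()
print(controller.on_pxe_boot("aa:bb:cc:dd:ee:01"))  # new server: boot-installer
print(controller.on_pxe_boot("aa:bb:cc:dd:ee:01"))  # same server later: boot-local
```

The point of the sketch is that no administrator has to register the server first; showing up on the network is enough.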

Nimbula Director doesn’t have a single point of failure. All management services run in a distributed control plane with built-in replication and failover.

Permissions are also different, to allow for better self-service. Users can grant permission and access to their own content, so IT is no longer a bottleneck when delegating access.

Then there’s networking. In a traditional world, IT departments have to deal with IP tables and firewalls, and that can become overwhelming in a scalable infrastructure with hundreds or thousands of machines. With Nimbula Director, applications can instead be assigned to network security groups and have security policy enforced independently of the underlying network topology.
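The security-group idea can be illustrated with a minimal sketch. The group names, VM names, and policy function below are hypothetical, not Nimbula's API: the key property is that rules reference named groups rather than IP addresses, so the policy still holds no matter which host a VM lands on.

```python
# Hypothetical sketch of group-based network policy: allow rules are pairs of
# (source group, destination group), independent of any VM's actual IP address.

ALLOWED = {("web", "app"), ("app", "db")}   # the security policy

# Which group each VM belongs to (assigned at launch, not tied to topology).
group_of = {"vm-17": "web", "vm-42": "app", "vm-99": "db"}

def is_allowed(src_vm: str, dst_vm: str) -> bool:
    """Permit traffic only if the VMs' groups appear in the policy table."""
    return (group_of[src_vm], group_of[dst_vm]) in ALLOWED

print(is_allowed("vm-17", "vm-42"))  # web -> app: True
print(is_allowed("vm-17", "vm-99"))  # web -> db: False
```

Because enforcement keys on group membership, adding a thousandth web server means adding one entry to the membership map, not rewriting firewall rules.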

Finally, there’s pricing. We want our pricing to reflect the more flexible cloud model. So, for example, if you install the software and use only a certain number of cores, you pay for that. If you burst to more, we’re not going to penalize you.

At the end of the year, you pay for any excess capacity used. If you install the software on 500 cores but use only 250, we charge you for the 250, whereas competitors might charge you for all 500. If your usage grows to a consistent 300, you pay for the extra 50.
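The arithmetic above can be made concrete with a toy billing function. The per-core price and the function itself are hypothetical illustrations, not Nimbula's actual price list; they only encode the stated rule of charging for sustained usage rather than installed capacity.

```python
# Illustrative arithmetic for the usage-based model described in the interview
# (hypothetical numbers, not an actual price list).

PRICE_PER_CORE = 100  # hypothetical annual price per core, in dollars

def annual_bill(installed_cores: int, sustained_cores_used: int) -> int:
    """Charge only for cores actually used on a sustained basis."""
    return min(installed_cores, sustained_cores_used) * PRICE_PER_CORE

# 500 cores installed, only 250 used on a sustained basis: pay for 250.
print(annual_bill(500, 250))   # 25000
# Usage grows to a consistent 300: the year-end true-up covers the extra 50.
print(annual_bill(500, 300) - annual_bill(500, 250))   # 5000
```

Short bursts above the sustained level would, per the stated model, not trigger the true-up; only a consistent increase does.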

This was last published in August 2011
