The OpenStack open source cloud computing management platform launched three years ago, and some industry watchers wonder what the organization plans to do next.
OpenStack Foundation Chief Operating Officer Mark Collier sat down with SearchCloudComputing to answer four questions about the OpenStack roadmap and strategy.
OpenStack Foundation board member Randy Bias recently wrote about OpenStack's third anniversary, arguing that Amazon Web Services (AWS) compatibility specifically is the way forward for OpenStack. Do you think that AWS compatibility is key to the success of OpenStack?
Mark Collier: There's not a binary 'yes' or 'no' answer to this. The reality is, there are a lot of service providers standing up OpenStack clouds right now, all over the world, and I think some of them will choose to pursue a strategy of mimicking Amazon with the Amazon APIs [application programming interfaces], and some of them won't. Honestly, it's too soon to tell whether compatibility with Amazon is going to be absolutely critical to the success or failure of OpenStack. The market will make that determination.
But if you think about the OpenStack roadmap, it's not designed to be a carbon copy of what Amazon's doing. Obviously they've been blazing a trail, and a lot of the problems that people want to solve will be similar, so I imagine you'll begin to see more and more of the same types of capabilities, but it's really not being driven by the idea of copying Amazon at this point.
Another thing that's being talked about is the idea that API compatibility among different clouds is not enough; there needs to be architectural compatibility among deployments of OpenStack. Do you agree?
Collier: I do agree that APIs sometimes get more credit than they deserve. Sometimes people oversimplify the concept of a platform and the compatibility of a platform based on just saying, 'If you've got the API, you've got compatibility.' The reality is that an application architecture depends on the behavior of the whole system it's interacting with. The API is simply the way it talks to that system. But it's going to expect certain behavior, and this is actually fundamental to the reason why we believe in OpenStack as a common platform for private and public clouds, because you're actually running the exact same software ... this is one of the reasons why it's not trivial or necessarily desirable to clone Amazon, because it's a black box.
We don't know what software actually runs Amazon, so it's going to be difficult to create an exact copy. You can copy the APIs, but the underlying software is going to be different software written by different people, and so I think a common set of software running in public and private clouds, being OpenStack, really increases the opportunities for interoperability with a common API along with a common deployment pattern underneath. A lot of this is stuff that's still kind of a work in progress as the market evolves and more people stand up OpenStack clouds, but I do agree with the idea that API alone is not enough to give the level of compatibility that a developer is looking for when designing an application. It's required, but it's not enough.
Does that mean people looking to move between different OpenStack-based clouds would have to work together on the underlying architecture, or is there anything OpenStack could do on its own to mitigate that issue?
Collier: I think that the best thing we can do is publish reference architectures.
What we find is that there are very common patterns in the way that people deploy OpenStack … to the extent that people are able to share their work. I think the knowledge about how to design and properly implement a large-scale OpenStack cloud is as important as the code itself. One of the things that's starting to happen is that the Heat project is now a part of OpenStack, and Heat provides a language that allows you to describe how a deployment works. It's really designed to sit above OpenStack when you are actually putting applications on it, but it can still be used to describe the OpenStack environment itself. There's a lot of work going on right now by our infrastructure team to actually describe OpenStack in terms of this language and a particular deployment type. Those kinds of blueprints, if you will, being published and shared, will help companies make smart decisions when they deploy their OpenStack clouds in a way that maximizes interoperability.
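To give a sense of the declarative language Collier is describing, here is a minimal sketch of a Heat Orchestration Template (HOT). The resource name, image, and flavor values are hypothetical placeholders for illustration, not part of the interview:

```yaml
heat_template_version: 2013-05-23

description: >
  Minimal illustrative Heat template that declares a single
  compute instance. Image and flavor names are placeholders.

resources:
  example_server:
    type: OS::Nova::Server
    properties:
      image: cirros          # placeholder image name
      flavor: m1.small       # placeholder flavor name
```

A template like this describes the desired state of a deployment; the Heat engine is responsible for creating and wiring the declared resources, which is what makes such blueprints shareable between clouds.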
Within the standard, what steps is OpenStack taking to ensure interoperability? Will it take steps to decertify technologies that don't pass tests?
Collier: It's something that we're starting to put more effort into now that we're getting a much bigger footprint all over the world with different OpenStack clouds.
As the user base is growing, one of the things that people are looking for is that interoperability. There are some interoperability tests that are in development this year that really leverage a lot of the work … that goes into testing every commit that comes into the software. We spin up a huge number of clouds within clouds, basically using Rackspace and HP infrastructure, and that test suite really helps make sure that every single day when new code comes in, it doesn't break anything.
And so we're looking for ways we can use that codebase and those tests to essentially validate end products downstream from the code … [and] we're still several months away from having that generally available, but it is something that we've actually incorporated from the beginning into all the trademark agreements for commercial companies that want to utilize OpenStack for their commercial products.
As we develop these tests, companies will need to pass them in order to use the OpenStack trademarks in a commercial context. We've laid the groundwork for it, but we don't have a timeframe right now for the actual test suite to be rolled out. Expect there to be a lot of discussion between now and our next Summit in Hong Kong in November.