mainframes, written in COBOL, onto modern server hardware. Now that cloud computing has come of age, he says, moving these old-school applications that underpin many of the world's financial systems into the cloud more than makes sense: it's a natural fit.
So why COBOL in the cloud?
Mark Haynie: What's interesting about the way that mainframes and the applications built upon them have been designed for the last 20-something years is that they were very stateless, almost RESTful implementations, but for different reasons. Back in 1980, the reason your mainframe applications didn't retain state, and instead used the data stream protocol between the client device and the mainframe to convey that state back and forth, was that the biggest mainframe ran at 6 MIPS and was limited to 6 megabytes of RAM. And that supported 800 connected users.
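The pattern Haynie describes can be sketched in a few lines: the server keeps no session memory between requests, and all conversation state rides out with the response and back in with the next request, much as it did in the 3270 data stream. (This is an illustrative sketch, not an actual CICS or Micro Focus API; the `handle_request` function and its account data are hypothetical.)

```python
# Illustrative sketch of a stateless transaction step: the "server" holds
# nothing between calls. State travels with the request and response,
# the way the 3270 data stream carried it then and tokens carry it now.

def handle_request(state, user_input):
    """Pure function: (incoming state, input) -> (response, outgoing state).
    Hypothetical example -- the step numbers and balance are made up."""
    step = state.get("step", 0)
    if step == 0:
        # First round trip: prompt the user, hand the state back to them.
        return "Enter account number:", {"step": 1}
    # Second round trip: the returned state tells us where we left off.
    return f"Balance for {user_input}: $125.00", {"step": 0}

# Two independent round trips; no session object survives between them.
reply1, state1 = handle_request({}, None)
reply2, state2 = handle_request(state1, "ACCT0001")
```

Because each call is self-contained, any of the 800 connected users could be served by any available cycle of that 6 MIPS machine, which is the same reason the pattern scales across cloud instances today.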
The cloud architecture exactly mirrors the expectations of mainframe programming?
Haynie: Essentially, these applications have always been built in a multi-tenant, multi-user environment. In fact, they've often been provisioned that way because mainframes have come in a variety of sizes over the years. People have had to dice their applications up into CICS regions and then handle cross-communication between those regions. That kind of technology can be mapped onto, say, Amazon's EC2 and S3 environments, so [for example] VSAM files would map to blobs in S3 buckets.
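The VSAM-to-S3 mapping might look roughly like this: a keyed VSAM dataset becomes a bucket of objects, with each record key as an object key and the record bytes as the object body. (A minimal sketch only; an in-memory dictionary stands in for S3 so the example is self-contained, and the `KsdsOverS3` class is hypothetical, not Micro Focus's actual implementation.)

```python
# Illustrative sketch: a VSAM KSDS (keyed dataset) emulated over an
# S3-style blob store. A real deployment would call S3 itself; here a
# dict stands in for the bucket so the sketch runs anywhere.

class ObjectStore:
    """Minimal S3-like blob store: bucket name -> {object key: bytes}."""
    def __init__(self):
        self._buckets = {}

    def put_object(self, bucket, key, body):
        self._buckets.setdefault(bucket, {})[key] = body

    def get_object(self, bucket, key):
        return self._buckets[bucket][key]


class KsdsOverS3:
    """Hypothetical mapping of a keyed VSAM file to blobs:
    record key -> object key, record bytes -> object body."""
    def __init__(self, store, bucket):
        self.store, self.bucket = store, bucket

    def write(self, record_key, record_bytes):
        self.store.put_object(self.bucket, record_key, record_bytes)

    def read(self, record_key):
        return self.store.get_object(self.bucket, record_key)


store = ObjectStore()
accounts = KsdsOverS3(store, "vsam-accounts")
accounts.write("ACCT0001", b"HAYNIE  MARK  0001250")
record = accounts.read("ACCT0001")
```

The appeal of the mapping is that COBOL code reading and writing keyed records doesn't need to know whether the keys resolve to a VSAM cluster or to objects in a bucket.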
COBOL is hardly state-of-the-art. Why won't customers focus on modernization?
Haynie: There are over 240 billion lines of COBOL code out there, and that installed base is growing by five billion lines a year. Those additional five billion lines of COBOL alone surpass, many times over, the number of lines of Ruby code being written, or Python for that matter.
And you're saying why bother when you've already got something stateless, distributed and already designed and working?
Haynie: We've had for years the ability to translate [mainframe applications] into a SOAP and WSDL XML-based Web service, and we're clearly using that same technology in the cloud. This is possible, of course, because these applications that were built 10 or 20 years ago were built with all these ideas in mind. Like you said, it's an "a-ha" moment, that all of these things will fit right on top of cloud computing infrastructures.
For instance, MQSeries was the first message queuing mechanism IBM provided to move messages between CICS regions. Essentially, when running on Amazon, we map those queues to the Simple Queue Service (SQS). Now, SQS on its own is not reliable, so we had to add in some transaction capabilities to make it reliable.
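One common way to layer transactional behavior over a queue like SQS is the receive/process/delete pattern: a received message stays on the queue but invisible, and is deleted only after processing succeeds, so a crash mid-transaction lets the message reappear and be retried. (A sketch under that assumption, using an in-memory queue in place of SQS; this is not Micro Focus's actual mechanism.)

```python
import time

class SimpleQueue:
    """In-memory stand-in for an SQS-like queue with visibility timeouts."""
    def __init__(self, visibility_timeout=30.0):
        self._messages = {}      # receipt handle -> [body, invisible_until]
        self._next_handle = 0
        self.visibility_timeout = visibility_timeout

    def send_message(self, body):
        self._next_handle += 1
        self._messages[str(self._next_handle)] = [body, 0.0]

    def receive_message(self):
        """Return (handle, body) of a visible message, hiding it for the
        visibility window, or None if nothing is visible."""
        now = time.monotonic()
        for handle, slot in self._messages.items():
            if slot[1] <= now:
                slot[1] = now + self.visibility_timeout
                return handle, slot[0]
        return None

    def delete_message(self, handle):
        """The 'commit': only now is the message really gone."""
        self._messages.pop(handle, None)


def process_transactionally(queue, work):
    """Receive, process, and only then delete. If `work` raises, the
    message is never deleted and reappears after the timeout."""
    received = queue.receive_message()
    if received is None:
        return False
    handle, body = received
    work(body)                       # may raise; message survives if it does
    queue.delete_message(handle)
    return True


q = SimpleQueue(visibility_timeout=0.1)
q.send_message("DEBIT ACCT0001 100.00")
done = process_transactionally(q, lambda body: print("processed:", body))
```

This gives at-least-once processing rather than exactly-once: a message whose work succeeded but whose delete was lost will be processed again, so the transaction logic itself has to be idempotent.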
How do you convince someone running a COBOL/mainframe environment, who looks at reliability with a completely different eye than someone who's running a Web server, that Amazon is a good idea?
Haynie: We're cloud agnostic, so technically, you could use our enterprise cloud services layered on top of Azure and our enterprise cloud services layered on top of EC2 to effectively be disaster recovery for each other. You could host two copies of your application, one production, one disaster recovery, mirrored in two different cloud infrastructure services. That's why the whole "cloud agnostic" part of the story is important. It gives customers a fast track to cloud computing, and [they] can still get all the benefits. You turn your servers off for the night or for weekends, and what's amazing is, people say, "Gee, that's like time-sharing in the '60s, when I used to pay for CPU seconds used, cards punched, pages printed?" And I say, "Yeah." It is exactly the same thing. That model has come back in the form of pay-per-drink cloud computing.
What does this portend for the future of old-school business applications in the cloud?
Haynie: It just gives customers more flexibility. I don't expect cloud computing to supplant on-premise data centers. But it makes sense to have, if you will, a hybrid type of model, where you are running some applications and transactions locally and maybe you're using the cloud for excess capacity or disaster recovery. After familiarity [sets in] with that cloud environment, you might flip those two, because it works the same way as on-premise. Then you might decide to turn off that on-premise data center because you're using cloud services on two different Infrastructure as a Service (IaaS) providers, and you're using one to fail over to the other. So we don't know which is going to work, but what we're saying is: These applications are working, they're producing revenue, they're valuable to your IT organization. Just put them out there like you're running them locally and see what works.
MARK HAYNIE'S BIO:
Mark Haynie is the CTO of application modernization at Micro Focus, where he concentrates on core IT application and data reuse in today's computing world. He has been a focal point in defining Micro Focus products and services that help IT organizations re-host IBM mainframe applications on Windows or Linux environments and meet the performance and availability requirements those systems demand. He has architected products to meet customer demand for extending applications to the Web and Web service environments common in a Service Oriented Architecture.
He joined Micro Focus in 1999, has over 25 years' experience in IT, and has conducted seminars and published papers and books on design automation, database design and high-performance transaction systems.