
Oracle chief technologist outlines vision of SOA grid

Service-oriented architecture (SOA) and extreme transaction processing (XTP) make up Oracle Corp.'s vision of the SOA grid, explains David Chappell, vice president and chief technologist for SOA at Oracle, in the second part of this interview. Of the SOA grid, he says, "It's based on an architecture that combines horizontally scalable, database-independent, middle-tier data caching with intelligent parallelization and affinity of business logic with cached data. What this enables is more efficient models for highly scalable SOA-based applications that can take full advantage of event-driven architectures." It also changes message-oriented middleware (MOM) into something Chappell dubs "not your MOM's enterprise service bus [ESB]." Chappell, with more than 20 years of experience in the software industry, has written and lectured on SOA and ESB technology.

Read Part 1

David Chappell
How does XTP fit into the SOA grid concept?
Putting all this together is at the heart of the SOA grid concept. The SOA grid combines elements of the Oracle SOA Suite with the Oracle Coherence product. It's a new approach to thinking about SOA infrastructure, one that provides state-aware continuous availability for your service infrastructure, your application data and your processing logic. It's based on an architecture that combines horizontally scalable, database-independent, middle-tier data caching with intelligent parallelization and affinity of business logic with cached data. What this enables is more efficient models for highly scalable SOA-based applications that can take full advantage of event-driven architectures.

You mentioned the financial services applications for XTP earlier. Are you looking at other industries besides financial services?
Oh yes. We already have traction in other industries, including travel and healthcare. One example is a large insurance company that implemented all of the state caching for its customer-facing portal, which customers use to update their profiles, fill out insurance claims and submit insurance applications. All the page-flow data, essentially where that customer stands within the insurance processing systems, is cached by the Coherence product. When they moved to that new architecture, they saw, as I recall, a 40x improvement in update speed and throughput, and a 400x improvement in read throughput. That came simply from caching the data at near in-memory speeds. It still uses a real database on the back end, but that database is updated through asynchronous queues in a write-behind, lazy-update fashion. So it combines in-memory access speeds with highly available middle-tier caching, made fault tolerant through redundancy, with lazy write-behinds to the relational database for storage and reliability. Besides the decrease in latency, they also get added reliability.
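The write-behind pattern Chappell describes can be sketched in a few lines. This is a generic illustration of the idea, not Coherence's actual API; the `Database` class here is a hypothetical stand-in for the real back end:

```python
import queue
import threading
import time

class Database:
    """Hypothetical back-end store standing in for the real database."""
    def __init__(self):
        self.rows = {}

    def write(self, key, value):
        self.rows[key] = value

class WriteBehindCache:
    """In-memory cache: reads are served from memory, writes update
    memory immediately and are flushed to the database lazily."""

    def __init__(self, database, retry_interval=0.1):
        self._data = {}
        self._db = database
        self._pending = queue.Queue()   # asynchronous write-behind queue
        self._retry = retry_interval
        threading.Thread(target=self._flush_loop, daemon=True).start()

    def get(self, key):
        return self._data.get(key)      # in-memory read, no DB round trip

    def put(self, key, value):
        self._data[key] = value         # update the cache immediately
        self._pending.put((key, value)) # defer the database write

    def _flush_loop(self):
        while True:
            key, value = self._pending.get()
            try:
                self._db.write(key, value)       # lazy database update
            except Exception:
                self._pending.put((key, value))  # retry later if the DB is down
                time.sleep(self._retry)
```

Because readers and writers touch only the in-memory map, the application can keep operating through a temporary back-end failure; the queue simply drains once the database comes back.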

There's a bit of hallway folklore that they lost their back-end database, which wasn't Oracle, by the way. It went down over the weekend, but nobody noticed because the Web application continued to operate.

With all this happening in memory, what about auditing?
There are some advanced patterns being implemented by customers where the database is asynchronously updated to provide audit data, long-term storage, querying, reporting and so on. But the real-time interactions between the user-facing apps and the middle-tier data caching happen independently of that.

How is that done?
There are some lower-level patterns enabled by the Coherence product where it's not just a cache; it really can become a distributed state machine for managing an application's state data, even to the extent that you can set up notifications based on an observer pattern. You can install listeners on any piece of state data cached in the data grid, so whenever something gets updated, any number of applications can be notified immediately, a pub/sub model based on state changes of the data stored in the middle tier. That is how the asynchronous write-behind queues for the database are implemented. Think of it as having a write trigger on a piece of data that gets put into the middle-tier data cache, which then fires off to as many interested parties as need to know about it. That in itself enables new and exciting models.

What kind of models?
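The listener idea can be sketched generically (again, this is not the Coherence API): a map-like cache that lets any number of callbacks subscribe to changes on a key, so every update acts as a write trigger.

```python
from collections import defaultdict

class ObservableCache:
    """Dict-like cache that notifies registered listeners on updates:
    a pub/sub model driven by state changes rather than by messages."""

    def __init__(self):
        self._data = {}
        self._listeners = defaultdict(list)  # key -> list of callbacks

    def add_listener(self, key, callback):
        """Register callback(key, old_value, new_value) for one key."""
        self._listeners[key].append(callback)

    def put(self, key, value):
        old = self._data.get(key)
        self._data[key] = value
        for notify in self._listeners[key]:  # the "write trigger"
            notify(key, old, value)

    def get(self, key):
        return self._data.get(key)

# Illustrative usage: an audit trail as one listener on a claim's state.
events = []
cache = ObservableCache()
cache.add_listener("claim:42", lambda k, old, new: events.append((k, old, new)))
cache.put("claim:42", "filed")
cache.put("claim:42", "approved")
# events now records both state transitions
```

In this sketch, the asynchronous write-behind queue described above would just be one more listener on the same keys, alongside audit logging or any other interested party.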
There's a concept I've been talking about called "not your MOM's [message-oriented middleware] bus." If the data is reliably stored in the grid, and all services plugged into the service bus can access it on an as-needed basis and be notified when the state data changes, then why use a traditional MOM to put data into a pipe and send it across the bus, only to take it out again on the other side, when you can just access the data grid directly?

Do you have any examples with metrics of how this SOA grid technology using XTP makes a difference in business processing?
An extreme case is a large bank that used this technology to re-architect and rewrite an existing risk management calculation. For regulatory compliance, they have to keep enough cash in the bank to cover their outstanding risk at any given time, and they have to prove what that outstanding risk is on a regular basis, at least daily. In the case they described to me, their previous grid technology with parallelization and a compute grid took 17 hours to run a particular risk calculation. When they rewrote it using Coherence, which is more of a data grid with compute grid capabilities, they got that same risk calculation down to 20 minutes.

We usually don't talk about hardware with SOA, but what sort of hardware is required for this?
In our case, the technology can be deployed across any combination of low-cost commodity hardware.

Would that be blade servers then?
Sure. Blade servers, Windows boxes, NT boxes, Linux boxes, Solaris boxes, whatever you happen to have around and available for allocation.

So you can have a hardware grid?
Yes. It's very complementary to virtualization that you may already have in place.
