Most IT organizations have come to recognize the benefits of componentized software in development and deployment. In the cloud, componentization brings important advantages, such as increased resiliency and support for horizontal scaling.
Microservices -- small functional elements that are often shared among applications -- can magnify these benefits considerably. But first, you have to plan, develop and deploy microservices properly.
Understand what makes microservices tick
To begin microservices planning, IT teams need to understand what makes microservices different from application components or elements of a service-oriented architecture. Microservices are not complete application pieces; they are designed to be shared, as services, among applications -- meaning multiple apps can invoke a single instance of a microservice at the same time. Microservices are also designed to use web-like RESTful interfaces.
If microservices don't fit the model above, they aren't likely to deliver as many benefits. When microservices do match the characteristics above, you need to sustain each of them in a hybrid or multicloud deployment.
Microservices' impact on multicloud networks
Because microservices are small pieces of functionality, they can divide applications into many successive requests for an external service. This service is accessed over a network that can introduce propagation delay and other network performance issues. It is critical that the network connection that links microservices to the applications that use them delivers the quality of service (QoS) needed to support users' experience. Before you deploy microservices, test their performance in all of the hosting variations across your hybrid or multicloud environment. If your QoS falls below acceptable levels, change your network connectivity to correct it. Alternatively, you could design your application deployment process so that services aren't moved to dead spots in your network.
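One way to run this kind of performance test is to measure round-trip latency from each hosting location to the microservice and compare it against a QoS budget. The sketch below illustrates the idea in Python; the `call_service` function and the 50 ms budget are assumptions standing in for a real network request and your own user-experience requirements.

```python
import time

# QoS budget: maximum acceptable round-trip latency, in milliseconds.
# This threshold is an assumption for illustration; derive yours from
# actual user-experience requirements.
QOS_BUDGET_MS = 50.0

def call_service(simulated_delay_s):
    """Hypothetical stand-in for a real microservice request.

    In a real test, this would issue an HTTP request to the service
    endpoint from the hosting variation under evaluation.
    """
    time.sleep(simulated_delay_s)
    return "ok"

def measure_latency_ms(simulated_delay_s):
    """Time one request and return the round-trip latency in milliseconds."""
    start = time.perf_counter()
    call_service(simulated_delay_s)
    return (time.perf_counter() - start) * 1000.0

def meets_qos(latency_ms, budget_ms=QOS_BUDGET_MS):
    """Flag hosting variations whose latency exceeds the QoS budget."""
    return latency_ms <= budget_ms

# Compare two hypothetical hosting variations.
same_cloud = measure_latency_ms(0.005)    # ~5 ms: service in the same cloud
cross_cloud = measure_latency_ms(0.120)   # ~120 ms: service in another cloud

print(f"same cloud:  {same_cloud:.1f} ms, meets QoS: {meets_qos(same_cloud)}")
print(f"cross cloud: {cross_cloud:.1f} ms, meets QoS: {meets_qos(cross_cloud)}")
```

Running the same measurement from each cloud and data center location before deployment identifies the dead spots the application deployment process should avoid.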
Network performance issues in hybrid and multicloud applications are usually related to the way traffic passes through the multicloud -- or cloud and data center -- boundary points. Talk to your cloud providers, your VPN provider and your data center networking team to optimize connectivity. Be especially wary with multicloud applications, because many public cloud providers won't connect directly with other providers; they will expect to connect back through your VPN or data center network. If an application in one cloud uses a microservice in another, there could be a long potential propagation delay. If you can't reduce it, avoid crossing cloud provider boundaries with microservice access. You may need to deploy a duplicate of the service in each cloud to avoid such network performance issues.
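One way to keep microservice access from crossing provider boundaries is to resolve each request to a replica in the caller's own cloud, falling back to the data center -- which every cloud can reach over the VPN -- when no local replica exists. A minimal sketch, assuming a hypothetical replica table and service URLs:

```python
# Hypothetical table of duplicate service deployments, keyed by the
# cloud each replica runs in. In practice this would come from your
# service catalog or deployment tooling.
REPLICAS = {
    "cloud-a": "https://svc.cloud-a.example.com/price-check",
    "cloud-b": "https://svc.cloud-b.example.com/price-check",
    "datacenter": "https://svc.dc.example.com/price-check",
}

def pick_endpoint(caller_cloud, replicas=REPLICAS, fallback="datacenter"):
    """Return the replica in the caller's own cloud when one exists, so
    requests never cross a cloud provider boundary; otherwise fall back
    to the data center replica reachable over the VPN.
    """
    return replicas.get(caller_cloud, replicas[fallback])

print(pick_endpoint("cloud-a"))       # same-cloud replica
print(pick_endpoint("other-cloud"))   # no local replica: data center fallback
```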
The need for multiple applications to access a microservice may also require network accommodations. The easiest way to approach microservice access is to assume that you have a flat private network that joins all your clouds and data centers. That way, you can deploy microservices anywhere, and applications can reach them using standard mechanisms, such as URLs and the Domain Name System (DNS), or other service cataloging methods.
Another challenge occurs when a microservice moves from one cloud provider to another, or between a cloud provider and a data center. Normally, this kind of movement requires a change in the IP address, which means the logical name of the service will have to be associated with a different address after the move. Make sure your tools and practices for replacing a failed component make the necessary change to DNS or service catalog entries so your applications can find the microservice in its new location.
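The re-registration step can be sketched with a simple in-memory catalog; a real deployment would use DNS or a service registry, and the service name and addresses below are assumptions for illustration.

```python
# Minimal in-memory service catalog mapping logical service names to
# network addresses. Applications always resolve by logical name,
# never by hardcoded address.
catalog = {"tax-calculator": "10.0.1.15"}

def resolve(name, registry=catalog):
    """Look up a service's current address by its logical name."""
    return registry[name]

def reregister(name, new_address, registry=catalog):
    """Called by redeployment tooling after a service moves between
    clouds, or between a cloud and the data center, so the logical
    name points at the new IP address."""
    registry[name] = new_address

# The service moves to another cloud and receives a new address.
reregister("tax-calculator", "172.16.9.40")
print(resolve("tax-calculator"))  # applications now reach the new location
```

The key point is that only the catalog entry changes; applications that resolve the logical name on each request find the service in its new location without modification.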
Deploy microservices securely
The fact that multiple applications often share a single microservice can create two other challenges in hybrid and multicloud environments: security and compliance, and stateful vs. stateless behavior.
Any time applications share functionality, there's a risk to any application with rigorous compliance requirements, because a shared service can become an entry point for outsiders into that application's data. Since moving microservices, or duplicating them under load, requires fairly open addressing, you need to secure each microservice with respect to its access. Avoid microservices that mix features demanding security and compliance monitoring with other features open to a larger community -- make them two different microservices.
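The split might look like the following sketch: one open service and one compliance-scoped service, so access control and monitoring apply only where they're required. The token check is a hypothetical placeholder for whatever authentication mechanism you actually use.

```python
# Assumption for illustration: a set of tokens standing in for a real
# authentication and authorization mechanism.
AUTHORIZED_TOKENS = {"app-payroll-secret"}

def open_lookup(item_id):
    """Open feature: available to the larger community, carries no
    compliance scope, deployed as its own microservice."""
    return {"item": item_id, "status": "public data"}

def compliance_lookup(item_id, token):
    """Sensitive feature: deployed as a separate microservice so that
    security and compliance monitoring apply only here."""
    if token not in AUTHORIZED_TOKENS:
        raise PermissionError("caller not authorized for compliance-scoped data")
    return {"item": item_id, "status": "regulated data"}
```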
Explore the stateful vs. stateless issue
The stateful vs. stateless issue is complex, even for software architects and developers. Applications typically support transactional activity that involves multiple steps or states. For example, imagine we have a service called "add two numbers." If we present the first number on one request and the second on another, other users could inadvertently introduce their own number between our two, and we'd get the wrong answer.
If a microservice cannot save data between the requests made to it, then make the requests stateless or ensure each request can somehow convey the state, if needed. In our example, providing both numbers in a single request eliminates the need for multiple requests, as well as the stateful behavior risk. It's also possible to have each request include a user ID that the microservice would associate, through a back-end database, with the state. When our first number is presented, the microservice would record that number in the database. Then, when the second is presented, it could add them and return the answer.
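Both designs from the "add two numbers" example can be sketched side by side; a dict stands in for the back-end database here, and the function names are illustrative.

```python
def add_stateless(a, b):
    """Stateless design: both numbers arrive in a single request, so the
    service never has to remember anything between calls."""
    return a + b

# Stateful design: the caller passes a user ID with each request, and
# the service keeps the first number in a back-end store keyed by that
# ID, so users' transactions cannot interleave.
pending = {}

def submit_number(user_id, number, store=pending):
    """First request records the number; second request adds and returns."""
    if user_id not in store:
        store[user_id] = number
        return None  # still waiting for this user's second number
    first = store.pop(user_id)
    return first + number

print(add_stateless(2, 3))         # 5
print(submit_number("alice", 2))   # None: first number recorded
print(submit_number("bob", 10))    # None: another user's state stays separate
print(submit_number("alice", 3))   # 5
```

Keying the stored state by user ID is what prevents the wrong-answer scenario described above, because one user's second request can never pair with another user's first.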
There's always a price to be paid for versatility, agility and flexibility -- and combining microservices with both hybrid and multiclouds represents the leading edge in our search for these three benefits. Plan carefully to minimize that price and deploy microservices that extend easily into a complex cloud future.