There are a lot of factors to consider, including APIs, when enterprises decide whether to go serverless. Every API has different requirements, so it's critical to carefully identify those that are, and aren't, suited for serverless infrastructure.
There are a few ways to determine when and why to use a serverless back end for an application. At a high level, there are two primary reasons to go serverless: reduced complexity and streamlined infrastructure.
Most APIs are, by nature, complex. Usability concerns aside, there are often many moving parts. From efficient data storage to implementations of business logic, there's a lot to manage. With the exception of large, monolithic APIs, serverless platforms are a great way to drastically reduce the complexity of both an application and the organization as a whole.
Unlike monolithic applications, applications built on a microservices architecture are perfect candidates for serverless infrastructure. The migration of an existing microservices-based application to a serverless back end enables developers to shrink those microservices into even more streamlined endpoints. This, in turn, reduces the cost of the underlying infrastructure, as each new serverless function only runs as needed.
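To make the shrinking concrete, here is a minimal sketch of what one such streamlined endpoint might look like as a function -- modeled loosely on an AWS Lambda-style handler. The event shape, handler name and order lookup are illustrative assumptions, not any specific provider's API:

```python
import json

def get_order_handler(event, context=None):
    """Hypothetical handler for a single 'get order' endpoint.

    In a traditional microservice, this logic would sit behind a
    long-running web server; as a serverless function, it runs -- and
    incurs cost -- only when invoked.
    """
    order_id = event.get("pathParameters", {}).get("orderId")
    if not order_id:
        return {"statusCode": 400,
                "body": json.dumps({"error": "orderId is required"})}
    # Placeholder lookup -- a real function would query a managed data store.
    order = {"orderId": order_id, "status": "shipped"}
    return {"statusCode": 200, "body": json.dumps(order)}
```

Because the function handles exactly one route, there is no routing table, connection pool or server process to maintain -- the platform supplies all of that per invocation.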
Embrace DevOps to further reduce complexity
Over the years, DevOps platforms have introduced ways to merge development and operations teams, allowing them to become leaner and meaner. While this saves money, it also reduces complexity within an organization, because it removes the layers between different pieces of infrastructure. Consolidating ownership of the infrastructure into one multidisciplinary team also leads to shared knowledge and significantly less gatekeeping. This reduces the time it takes to design, develop and release new features.
Like microservices-based applications, smaller, single-feature APIs are perfect for serverless infrastructure. The smaller footprint of these APIs lets developers encapsulate them into smaller and more targeted serverless functions.
By definition, serverless platforms eliminate the need to think about the underlying servers. This introduces a new level of freedom that lets developers streamline everything from the size of their team to the amount of infrastructure an API requires -- all of which lower costs.
Serverless also simplifies scaling. APIs that need to scale unpredictably are a good fit for serverless platforms because of their inherent ability to scale as needed. By eliminating the need to manage the scalability of infrastructure resources, serverless enables teams to instead spend more time making fast and reliable APIs.
Serverless does, however, generally mean vendor lock-in. While this isn't necessarily a bad thing, it does mean that serverless infrastructure is better suited for applications and APIs already built on top of an existing cloud platform. APIs that are self-hosted or built on top of generic virtualization platforms miss out on the secondary benefit of serverless -- namely, easy integration between the other services that a cloud provider offers.
When not to go serverless
While serverless platforms can reduce costs and increase productivity, they're not perfect for all applications or workloads. As mentioned above, serverless platforms are an excellent choice for APIs and applications built with microservices but, unfortunately, are not a good fit for monolithic applications -- at least, not without significant overhaul.
In addition, the cost savings associated with serverless won't apply to longer-running application processes. Instead, run these processes on always-on infrastructure, such as a virtual server.
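A back-of-the-envelope calculation illustrates why. The sketch below compares per-invocation billing against a flat monthly server fee; all prices are illustrative assumptions, not any provider's actual rates:

```python
def serverless_monthly_cost(invocations, avg_seconds, gb_memory,
                            price_per_gb_second=0.0000166667,
                            price_per_million_requests=0.20):
    """Rough serverless bill: pay per GB-second of compute plus per request.

    The rates here are assumed for illustration only.
    """
    compute = invocations * avg_seconds * gb_memory * price_per_gb_second
    requests = invocations / 1_000_000 * price_per_million_requests
    return compute + requests

# Assumed flat monthly cost of a small always-on virtual server.
ALWAYS_ON_VM = 35.0

# A process that effectively runs all month (~2.59M seconds at 1 GB)
# costs more on serverless than the flat-rate server would.
long_running = serverless_monthly_cost(1, 2_592_000, 1)

# A bursty API (1M short requests) comes out far cheaper than the server.
bursty_api = serverless_monthly_cost(1_000_000, 0.2, 0.5)
```

Under these assumed rates, the always-running process exceeds the flat server fee, while the bursty workload costs a small fraction of it -- which is the core reason long-running processes belong on always-on infrastructure.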
Applications or APIs that need to move between cloud provider platforms also aren't a great fit for serverless, because of the vendor lock-in risks. Each serverless platform has its own nuances, so the migration of applications between them is a difficult and time-consuming process.
Lastly, some applications might require adjustments to run at high performance on serverless infrastructure. A cold start is the amount of time it takes for a serverless application to execute its first function request. These requests can be slow, since the provider needs to allocate the appropriate resources to handle the serverless function. Applications that require a high degree of reliability on top of speed must account for this issue manually, typically by keeping functions in an active, or "warm," state.
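One common warm-keeping pattern is to have a scheduler invoke the function every few minutes with a marker event, so the execution environment stays alive between real requests. The sketch below is an assumed implementation of that pattern -- the "warmup" field and event shape are hypothetical, not a provider-defined convention:

```python
import json

def handler(event, context=None):
    """Hypothetical handler that short-circuits scheduled keep-warm pings.

    A cron-style rule is assumed to invoke the function periodically with
    {"warmup": True}; those invocations return immediately and exist only
    to keep an instance active, reducing cold starts for real traffic.
    """
    if event.get("warmup"):
        # No business logic runs on a ping -- keep it as cheap as possible.
        return {"statusCode": 200, "body": "warm"}
    # Normal request path.
    return {"statusCode": 200,
            "body": json.dumps({"message": "real work here"})}
```

Note that many providers now offer managed alternatives to this do-it-yourself approach (such as pre-provisioned or reserved capacity), which trade some of the pay-per-use savings for predictable latency.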
Ultimately, the decision to go serverless can have drastic consequences for an application -- beyond simply the infrastructure. The number of IT employees and the teams they sit on can be wildly different for serverless vs. more traditional virtualized infrastructures. Be sure to make this decision as thoughtfully as any other technical decision related to application development and design.