Serverless computing offers a highly efficient way to deploy and run software on demand. Its rise in popularity can be attributed to its simplicity, lower costs and faster time to market. But, like any other technology, it needs proper security.
Serverless computing requires developers and ops teams to rethink their security approach. Follow these steps to limit your vulnerabilities and protect your serverless apps.
To improve serverless security, write minimalistic functions that call only the resources needed to achieve a given task. Minimalism decreases potential attack vectors and limits the potential ramifications of a vulnerability within one function. The fewer resources a function can access, the less harm attackers can do if they gain control of that function.
Also, write different functions for different tasks, and separate those functions from one another as much as possible. This isolation decreases the likelihood that a vulnerability in one function will affect other functions.
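As a sketch of what this separation looks like in practice, the two handlers below each perform exactly one task and accept only the input that task needs. The function names and event fields are illustrative assumptions, not part of any particular platform's API.

```python
# Sketch: each serverless function does exactly one task and touches only
# the data that task requires. Names and event fields are illustrative.

def resize_image(event):
    """Single-purpose function: reads one input, returns one result.
    It has no access to billing, user or admin resources."""
    width = int(event["width"])
    height = int(event["height"])
    scale = float(event.get("scale", 0.5))
    return {"width": int(width * scale), "height": int(height * scale)}

def publish_thumbnail(event):
    """Separate function for a separate task; a flaw in resize_image
    cannot reach whatever storage this function is allowed to write."""
    return {"status": "queued", "key": f"thumbs/{event['key']}"}
```

Because each function is isolated, an attacker who compromises `resize_image` gains nothing but the ability to do arithmetic on image dimensions.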
Use dependencies carefully. It is common to include dependencies from third-party repositories within serverless code; however, avoid this unless absolutely necessary, because you won't have as much ability to secure them. The developers who created the code might not follow the same security standards as you, and, if problems arise, you will depend on those third-party developers to fix the issue -- and they may not fix it as quickly as you need. In cases where dependencies are required, always include the latest stable versions of the ones you pull.
Keep careful inventory of the dependencies in serverless code, and use vulnerability detection tools to receive notification of any security problems discovered in those dependencies.
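One lightweight way to keep that inventory honest is to compare the packages actually installed against a pinned manifest before each deployment. The sketch below uses Python's standard `importlib.metadata` module; the manifest contents are illustrative.

```python
# Sketch: flag any dependency that is missing or has drifted from its
# pinned version, so surprises surface before deployment rather than in
# production. The pinned manifest passed in is an illustrative example.
from importlib import metadata

def audit_dependencies(pinned):
    """Return human-readable findings for every pinned package that is
    missing or installed at a version other than the one expected."""
    findings = []
    for name, expected in pinned.items():
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            findings.append(f"{name}: not installed (expected {expected})")
            continue
        if installed != expected:
            findings.append(f"{name}: {installed} != pinned {expected}")
    return findings
```

A check like this complements, rather than replaces, a dedicated vulnerability scanner: it catches drift, while the scanner catches known CVEs in the pinned versions themselves.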
Analyze and test. To further ensure serverless security, analyze functions for potential vulnerabilities in their code. As teams often develop and deploy functions in a different pipeline than the rest of an app, it's crucial to remember to include them in routine security tests.
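Including functions in routine security tests can be as simple as writing negative tests against the handler itself. The handler below is a hypothetical example that validates input before using it; the assertions shown in its docstring are the kind of checks worth running in CI alongside the rest of the app's test suite.

```python
# Sketch: treat serverless handlers like any other code under security
# test. This illustrative handler builds a storage key from user input
# and rejects anything that could escape the intended prefix.

def get_report(event):
    """Return the storage key for a requested report, rejecting names
    that are not strictly alphanumeric (e.g. path-traversal attempts)."""
    name = event.get("report", "")
    if not name.isalnum():
        raise ValueError("invalid report name")
    return f"reports/{name}.json"
```

A security test for this function would assert both the happy path and that malicious input, such as `"../admin"`, is rejected.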
IT operations teams
Monitor the environment. The importance of monitoring serverless environments may seem obvious. However, because it can be difficult to properly monitor a serverless environment with existing enterprise security tools, this is an important point to emphasize for ops teams.
It's often possible to pull metrics from a serverless environment into a security information and event management (SIEM) system. However, most legacy SIEM tools were not designed to detect anomalous behavior within event-driven frameworks. For example, a conventional SIEM might mark a process that runs briefly and then stops as an anomaly because that type of behavior is not typical on conventional infrastructure -- even though it is entirely normal for a serverless function. Customize SIEM policies to help a security analytics system understand serverless behavior, or adopt a detection tool designed specifically for serverless security, such as PureSec or Twistlock.
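To make the customization concrete, a serverless-aware rule inverts the usual baseline: short, clean invocations are normal, while unusually long runs or abnormal exits are what merit attention. The sketch below is a minimal illustration; the threshold and event fields are assumptions, not fields from any specific SIEM product.

```python
# Sketch: a detection rule tuned for event-driven workloads. Rather than
# flagging short-lived processes (the serverless baseline), it flags
# invocations that run unusually long or exit abnormally.
# The threshold and event-field names are illustrative assumptions.

MAX_EXPECTED_SECONDS = 30.0

def is_anomalous(invocation):
    """Return True if an invocation record looks suspicious for a
    serverless environment."""
    duration = float(invocation.get("duration_seconds", 0.0))
    status = invocation.get("status", "ok")
    # A brief run that ends cleanly is normal for a function, not an alert.
    return duration > MAX_EXPECTED_SECONDS or status != "ok"
```

In a real deployment, the threshold would be set per function based on its observed duration profile rather than a single global constant.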
Customize access policies. Cloud-based serverless platforms, such as AWS Lambda, offer preconfigured, identity-based access control policies that manage which users can invoke and monitor serverless functions.
These policies are useful starting points for serverless security, but don't rely exclusively on vendor-supplied configurations to control serverless resources. They are default options for general purposes and aren't designed to meet your specific needs. Also, attackers know the default configurations, so they can more easily find potential attack vectors.
Instead, take the vendor-supplied configuration that provides the least amount of access, then build up from there.
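A minimal starting point might look like the policy document below, expressed here as a Python dict in the standard AWS IAM policy format. It grants permission to invoke exactly one function and nothing else; the region, account ID and function name are placeholders.

```python
# Sketch: start from the narrowest grant and widen only as needed.
# This follows the standard AWS IAM policy document format; the ARN's
# region, account ID and function name are placeholder values.
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "lambda:InvokeFunction",
            "Resource": (
                "arn:aws:lambda:us-east-1:123456789012"
                ":function:resize-image"
            ),
        }
    ],
}
```

Each additional permission is then an explicit, reviewable addition to the statement list, rather than something inherited silently from a broad default.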
Use autoscaling wisely. Enterprises value serverless functions because they can quickly scale. However, if ops teams configure functions to scale rapidly without reasonable limits, attackers -- or just poorly written code -- can trigger a large volume of functions in a short time, which leads to significant costs.
Find a happy medium that lets functions scale as much as they need to for legitimate use, but also prevents costly abuse via autoscaling limits. It takes time to find this middle ground, and ops engineers may need to adjust it manually from time to time.
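Managed platforms expose this guardrail as a setting -- on AWS Lambda, for example, as reserved concurrency per function. The in-process model below is only an illustration of the accounting involved, with an assumed limit, not an implementation of any platform feature.

```python
# Sketch: a concurrency cap as the cost guardrail described above.
# Invocations beyond the limit are throttled instead of running up a
# bill. The limit value here is an illustrative assumption.

class ConcurrencyCap:
    def __init__(self, limit):
        self.limit = limit
        self.in_flight = 0

    def try_start(self):
        """Admit an invocation only if it stays under the cap."""
        if self.in_flight >= self.limit:
            return False  # throttled: protects cost as well as capacity
        self.in_flight += 1
        return True

    def finish(self):
        """Release a slot when an invocation completes."""
        self.in_flight = max(0, self.in_flight - 1)
```

Tuning the limit is the "happy medium" work the text describes: set it from observed peak legitimate load, then revisit it as usage grows.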