In physical implementations of zero trust security -- a security model in which no user, interface or application is automatically trusted -- traffic flows through a centralized security device. Because a single device has to filter all traffic, it's difficult for a zero trust security policy to scale. Environments can scale, however, when the workloads and network are virtual or cloud-based.
In the data center, micro-segmentation, a byproduct of hypervisor-based network overlays, allows zero trust security to be applied at scale. And with cloud services, micro-segmentation is often inherent, according to a podcast with Adrian Cockcroft, a technology fellow at venture capital firm Battery Ventures and former cloud architect at Netflix. Here's a look at the micro-segmentation and zero trust security capabilities within various virtualization and cloud platforms.
VMware NSX
VMware's network virtualization platform, NSX, filters any traffic that travels to and from the hypervisor, which enables zero trust security. VMware uses the scalability of NSX's distributed firewall to enforce zero trust security between virtual machines (VMs) on separate hosts. A security policy can also be applied between VMs on the same logical Layer 2 broadcast domain.
VMware's approach abstracts physical zero trust security while taking advantage of the distributed nature of hypervisor-based network overlays. Administrators create rules in a centralized management system, and those rules are enforced by the distributed firewall on each hypervisor. The result is a centrally managed solution that can scale to double-digit Gbps per hypervisor.
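The centralized-policy, distributed-enforcement pattern described above can be sketched in a few lines of Python. This is a conceptual model only -- the class and rule names are hypothetical illustrations, not the NSX API:

```python
# Conceptual sketch: rules are defined once in a central policy, and
# every hypervisor enforces the same rule set locally, so filtering
# capacity grows with the number of hosts. Names are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    src_vm: str   # source VM name
    dst_vm: str   # destination VM name
    port: int     # allowed destination port

class CentralPolicy:
    """Rules are created once, in a centralized management system."""
    def __init__(self):
        self.rules = set()

    def allow(self, src_vm, dst_vm, port):
        self.rules.add(Rule(src_vm, dst_vm, port))

class HypervisorFirewall:
    """Each hypervisor holds an enforcement point for the same policy."""
    def __init__(self, policy):
        self.policy = policy

    def permits(self, src_vm, dst_vm, port):
        # Zero trust: deny unless an explicit rule allows the flow.
        return Rule(src_vm, dst_vm, port) in self.policy.rules

policy = CentralPolicy()
policy.allow("web-01", "db-01", 3306)

host_a = HypervisorFirewall(policy)  # enforcement point on host A
host_b = HypervisorFirewall(policy)  # enforcement point on host B

print(host_a.permits("web-01", "db-01", 3306))  # True: explicitly allowed
print(host_b.permits("web-01", "db-01", 22))    # False: default deny
```

Because every host evaluates the same centrally defined rule set, adding hypervisors adds filtering capacity rather than funneling traffic through one device.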
Amazon Web Services
In cloud platforms, customers and third-party products don't have direct access to the underlying hypervisor. This means customers must rely on services made available via cloud APIs to achieve zero trust security.
In the case of Amazon Web Services (AWS), it's important to understand the connectivity between the public cloud and an internal network. There are three ways to access instances running in AWS: the public Internet, IPsec-based Amazon Virtual Private Cloud (VPC) and AWS Direct Connect -- a dedicated Layer 3 circuit into Amazon facilities. All connectivity options require customer-side IP termination, which should be through a firewall. Amazon does not allow users' Layer 2 traffic to be extended over Direct Connect or VPC.
Based on the AWS connectivity design, there is inherently zero trust between AWS-hosted instances and on-premises nodes. The question then becomes: What happens to instance-to-instance traffic within AWS?
AWS uses security groups to control network access to instances. Security groups can be defined broadly or narrowly, and an instance can have one or several security groups applied to it. While the focus is typically on IP traffic, it's important to know that VPC networking doesn't support broadcast or multicast traffic, so there is no need to filter for non-IP traffic.
Developers can create or assign rules using either the AWS Management Console or the AWS API, and they can apply rules to both inbound and outbound traffic. By default, all outbound traffic is allowed, while inbound traffic is denied. This granular capability can make it difficult to troubleshoot connectivity, as a single instance can belong to multiple security groups. In addition, separate security groups exist for both Windows and Linux, which could result in conflicting or overlapping security policies. However, this can be the case with any zero trust security offering.
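As a concrete illustration of the API route, the snippet below builds an ingress rule in the shape that boto3's `authorize_security_group_ingress` call expects. The security group ID and CIDR range are hypothetical examples, and the API call itself is left commented out because it requires AWS credentials:

```python
# Sketch of an AWS security group ingress rule in the IpPermissions
# shape used by boto3. The group ID and CIDR are hypothetical.

def ssh_ingress_rule(cidr):
    """Build an IpPermissions entry allowing inbound SSH from `cidr`."""
    return {
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": cidr}],
    }

rule = ssh_ingress_rule("203.0.113.0/24")

# With credentials configured, the rule could be applied like this:
# import boto3
# ec2 = boto3.client("ec2")
# ec2.authorize_security_group_ingress(
#     GroupId="sg-0123456789abcdef0",  # hypothetical group ID
#     IpPermissions=[rule],
# )

print(rule["FromPort"], rule["IpRanges"][0]["CidrIp"])
```

Note the rule only opens inbound port 22 from one range; everything else inbound stays denied by default, which is the zero trust posture.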
Google Compute Engine
Google Compute Engine networking more closely resembles traditional networking. Each instance is assigned to a default network, which allows instance-to-instance traffic within the same network, and Google provides a logical firewall that blocks traffic between networks. To achieve zero trust security, administrators must either run a local firewall, such as iptables, on each instance or place each instance in its own network. However, local firewalls can be extremely difficult to manage at scale and, because CPU cycles are used to enforce the rules, they may impact VM performance.
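The network-boundary behavior described above can be modeled in a few lines. This is a toy model with illustrative names, not the Compute Engine API -- it shows why one-instance-per-network yields zero trust while a shared default network does not:

```python
# Toy model of the behavior described above: instances in the same
# network can reach each other; traffic between networks is blocked
# by the logical firewall. Names are illustrative.

networks = {
    "default": {"web-1", "web-2"},  # shared network: open within
    "isolated-db": {"db-1"},        # one instance per network: zero trust
}

def network_of(instance):
    return next(net for net, members in networks.items() if instance in members)

def reachable(src, dst):
    # Instance-to-instance traffic is allowed only within one network.
    return network_of(src) == network_of(dst)

print(reachable("web-1", "web-2"))  # True: same default network
print(reachable("web-1", "db-1"))   # False: blocked between networks
```

In this model, `web-1` and `web-2` trust each other implicitly, which is exactly the gap that per-instance networks or local firewalls must close.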
Microsoft Azure
Microsoft Azure's networking is similar to Google Compute Engine's: VMs are grouped into logical private networks. Azure supports endpoint access control lists (ACLs), which are applied at the host level; processing rules at the host level helps preserve local VM performance. As with Google, Azure doesn't let you apply rules to groups of instances, so keeping track of rules in a large environment can be a challenge.
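Host-level ACL evaluation of the kind described above is typically an ordered list of permit/deny rules checked first-match-wins. The sketch below models that semantics in Python; the rule contents are hypothetical, not Azure's actual configuration format:

```python
# Sketch of ordered permit/deny ACL evaluation, first match wins,
# with an implicit default deny. Rule contents are hypothetical.

from ipaddress import ip_address, ip_network

acl = [
    ("permit", ip_network("10.0.0.0/16")),  # e.g., a corporate range
    ("deny",   ip_network("0.0.0.0/0")),    # everything else
]

def evaluate(src_ip, acl):
    """Return True if the first matching rule permits src_ip."""
    addr = ip_address(src_ip)
    for action, net in acl:
        if addr in net:
            return action == "permit"
    return False  # no rule matched: default deny

print(evaluate("10.0.4.7", acl))      # True: hits the permit rule first
print(evaluate("198.51.100.9", acl))  # False: caught by the deny rule
```

Because rule order determines the outcome, per-VM ACL lists like this are exactly what becomes hard to track across a large environment.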
Cloud providers have robust models for zero trust security and micro-segmentation. In the case of Microsoft Azure and AWS, developers can choose to integrate micro-segmentation security from within an application. This allows developers to create applications that scale and react to rapidly changing security requirements.