The Azure Kubernetes Service is one of many offerings that reduce the overhead of Kubernetes cluster operation and deployment -- but it's no silver bullet.
AKS customers still need a basic understanding of Kubernetes fundamentals, such as networking, which continues to bewilder new users. The service offloads tasks such as cluster health monitoring and maintenance, but when users create a new cluster, they must choose between the Basic and Advanced networking options -- or risk poor performance and higher costs.
Basic networking is the default option for Azure Kubernetes Service. Microsoft manages the cluster and pod network configuration, so it's well-suited for those new to Azure and Kubernetes. But with the default option, users can't control the cluster's network configuration, including assigned subnets or IP address ranges. This is a drawback if a cluster needs to integrate with an existing Azure virtual network (VNet).
With Basic networking, AKS nodes use the kubenet plug-in as their container network interface (CNI). Kubenet is quite rudimentary, with the following features and limitations:
- It is Linux-only, since it uses a Linux virtual bridge and a pair of virtual Ethernet interfaces to wire each pod to its physical host system.
- Pod IP addresses are assigned by the Kubernetes controller manager rather than drawn from an Azure VNet.
- Users have no control over other network parameters, such as the maximum transmission unit and the routing table for external traffic.
- It provides no cross-node networking, no network policy support and no integration with existing Azure virtual networks (VNets).
- It supports a maximum of 110 pods per node.
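A Basic networking cluster requires no network parameters at deployment time. A minimal sketch with the Azure CLI, using hypothetical resource group and cluster names, might look like this (kubenet is the default plug-in, so `--network-plugin kubenet` is shown only for clarity):

```shell
# Create a resource group and a Basic networking (kubenet) cluster.
# All names and the location are hypothetical examples.
az group create --name myResourceGroup --location eastus
az aks create \
    --resource-group myResourceGroup \
    --name myBasicCluster \
    --network-plugin kubenet \
    --node-count 3
```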
Basic networking can expose applications to external clients via the Azure Load Balancer, but its lack of support for existing VNets is a deal-breaker for any enterprise that uses Azure for production workloads. Every non-trivial Azure implementation will use one or more VNets to segment traffic and connect with internal networks. There is an open source project with scripts to deploy Azure Kubernetes Service in a separate VNet with kubenet, but Advanced networking provides greater flexibility.
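Exposing a workload through the Azure Load Balancer only takes a Kubernetes service of type LoadBalancer. As a sketch, with a hypothetical deployment name and ports:

```shell
# Create a LoadBalancer-type service for an existing deployment;
# "myapp" and the ports are hypothetical examples.
kubectl expose deployment myapp \
    --type=LoadBalancer \
    --port=80 \
    --target-port=8080
```

AKS then provisions a public IP on the Azure Load Balancer and routes traffic to the pods behind the service.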
With the Advanced networking option, users can customize the Azure VNet into which they deploy Kubernetes pods. Nodes use the Azure CNI plug-in instead of kubenet, which adds support for Windows containers and for third-party IP address management software and services. Enterprises can deploy clusters into an existing VNet or into a new subnet defined during cluster configuration.
Containerized applications can communicate with other pods in the cluster and with compute nodes in the VNet through private IP addresses unique to each pod. Applications within a pod can connect to Azure services on other VNets and to on-premises networks over a VPN or via ExpressRoute. Additionally, storage or database instances exposed through a VNet service endpoint can share data with pods.
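Service endpoints are enabled on the subnet, not on the cluster. A minimal sketch, assuming hypothetical resource names, that grants the cluster's subnet direct access to Azure SQL:

```shell
# Enable a VNet service endpoint for Azure SQL on the subnet that
# hosts the cluster; all resource names are hypothetical examples.
az network vnet subnet update \
    --resource-group myResourceGroup \
    --vnet-name myVnet \
    --name myAksSubnet \
    --service-endpoints Microsoft.Sql
```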
Advanced networking also gives users control over the pod route table for connections to other VNets or to a virtual appliance such as a firewall. The same applies to the DNS service the cluster uses for service discovery, which is critical when using HTTP application routing.
Although enterprises can set up VNets with tightly restricted access policies, subnets that host an Azure Kubernetes Service cluster must allow outbound connectivity. Some other restrictions and limitations with Advanced networking include the following:
- Only one Azure Kubernetes Service cluster is allowed per subnet.
- Certain addresses are reserved for the Kubernetes service.
- The service account -- the service principal for a cluster -- needs at least network contributor permission on the VNet subnet that hosts the cluster.
- VNet subnets must be big enough to include a unique IP for each node in the cluster plus 30 addresses per node for pods.
- It requires a separate, unique address range for Kubernetes services that is smaller than a /12 Classless Inter-Domain Routing (CIDR) block.
- It supports a maximum of 30 pods per node.
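The subnet-sizing rule above is easy to work out ahead of time. A rough sketch for a hypothetical 50-node cluster, where each node consumes one IP for itself plus 30 pre-allocated pod addresses:

```shell
# Rough subnet sizing for Advanced networking:
# each node needs 1 IP for itself plus 30 IPs reserved for pods.
NODES=50
IPS_PER_NODE=$((1 + 30))
REQUIRED=$((NODES * IPS_PER_NODE))
echo "$REQUIRED"   # 1550 addresses -> fits comfortably in a /21 subnet
```

Remember that Azure also reserves a handful of addresses in every subnet, so leave headroom beyond the raw node-and-pod count.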
Enterprises can set up Advanced networking when clusters are deployed via the Azure portal, the Azure CLI or an Azure Resource Manager template.
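With the Azure CLI, an Advanced networking deployment into an existing subnet might look like the following sketch. The subnet resource ID and the address ranges are hypothetical examples; the service CIDR must not overlap any VNet range, and the DNS service IP must fall inside it:

```shell
# Deploy a cluster with the Azure CNI plug-in into an existing VNet
# subnet; the subnet ID and address ranges are hypothetical examples.
az aks create \
    --resource-group myResourceGroup \
    --name myAdvancedCluster \
    --network-plugin azure \
    --vnet-subnet-id "/subscriptions/<sub-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVnet/subnets/myAksSubnet" \
    --service-cidr 10.2.0.0/24 \
    --dns-service-ip 10.2.0.10 \
    --node-count 3
```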
Networking best practices
There are three crucial steps Azure Kubernetes Service users shouldn't omit, even though they're not strictly related to networking: configuration of cluster monitoring, logging and access controls.
The Azure Container Health service is one option for the first two. It automatically collects memory and processor metrics from controllers, nodes and containers, feeds that information to a containerized version of the Azure Operations Management Suite agent for Linux and stores the data in a central Azure Log Analytics workspace. Alternatively, users can manually configure Log Analytics to collect and analyze data from cluster master controllers.
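Container Health is packaged as the AKS monitoring add-on, which can be switched on for an existing cluster. A sketch with hypothetical cluster and workspace names:

```shell
# Enable the monitoring add-on on an existing cluster and point it at
# a Log Analytics workspace; all resource names are hypothetical.
az aks enable-addons \
    --resource-group myResourceGroup \
    --name myAdvancedCluster \
    --addons monitoring \
    --workspace-resource-id "/subscriptions/<sub-id>/resourceGroups/myResourceGroup/providers/Microsoft.OperationalInsights/workspaces/myWorkspace"
```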
In addition to the service principal account, users -- particularly those with production workloads -- should also set up access controls for the Kubernetes cluster. Ideally, access controls are defined with user and group definitions in an existing directory, such as Azure Active Directory, since the Kubernetes API server is exposed through a public fully qualified domain name.
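Once directory integration is in place, directory groups can be mapped to Kubernetes roles. A minimal sketch that grants a group read-only access cluster-wide, using a placeholder for the group's object ID:

```shell
# Bind a directory group to the built-in read-only "view" role;
# the binding name and group object ID are hypothetical placeholders.
kubectl create clusterrolebinding aks-viewers \
    --clusterrole=view \
    --group=<directory-group-object-id>
```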