IP exhaustion in AKS got you down? Try Azure CNI Overlay.

One of the top concerns I see from companies when architecting AKS is running out of IP addresses, commonly known as IP exhaustion. This concern comes up when selecting the network model for AKS, specifically with Azure CNI.

Companies lean towards Azure CNI at first but quickly opt for Kubenet. Azure CNI provides real benefits on Azure: it has deeper integration between Kubernetes and Azure networking, so you don’t have to manually configure routing for traffic to flow from pods to other resources on Azure VNets. Pods get full network connectivity and can be reached directly via their private IP addresses. It also supports virtual nodes (Azure Container Instances), Windows containers, and either Azure or Calico network policies. Azure CNI does, however, require more IP address space. Traditional Azure CNI assigns an IP address to every pod from an Azure subnet, with a set of IPs pre-reserved on every node. This method can quickly exhaust the available IPs.

The alternative to Azure CNI with AKS is Kubenet. A lot of companies opt for Kubenet to avoid IP exhaustion because it conserves IP address space. Kubenet assigns pods private IP addresses that are not part of the Azure VNet, so there is no built-in routing to Azure networking; to route traffic from pods to Azure VNets you have to manually configure and manage user-defined routes (UDRs). With Kubenet, a simple /24 CIDR range can support up to 251 nodes in an AKS cluster, which gives you enough IPs for up to 27,610 pods (at 110 pods per node).

With Azure CNI, the same /24 CIDR range supports only up to 8 nodes and 240 pods (at the default max of 30 pods per node with Azure CNI, each node allocates 31 IP addresses: 1 for the node plus 30 for pods).
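To make that math concrete, here is a minimal sketch (my own illustration, not part of any Azure tooling) that reproduces these numbers with Python’s ipaddress module. The constants are assumptions based on documented defaults: Azure reserves 5 IPs in every subnet, Kubenet defaults to 110 pods per node, and traditional Azure CNI defaults to 30 pods per node.

```python
import ipaddress

AZURE_RESERVED_IPS = 5          # Azure holds back 5 addresses in every subnet
KUBENET_PODS_PER_NODE = 110     # Kubenet default max pods per node
AZURE_CNI_PODS_PER_NODE = 30    # traditional Azure CNI default max pods per node

def capacity(cidr: str) -> None:
    usable = ipaddress.ip_network(cidr).num_addresses - AZURE_RESERVED_IPS

    # Kubenet: only nodes draw IPs from the subnet; pods come from a separate pod CIDR.
    # (Real clusters are also capped by other limits, e.g. the 400-node UDR max.)
    kubenet_nodes = usable
    kubenet_pods = kubenet_nodes * KUBENET_PODS_PER_NODE

    # Traditional Azure CNI: every node reserves 1 IP for itself + 30 for its pods.
    cni_nodes = usable // (1 + AZURE_CNI_PODS_PER_NODE)
    cni_pods = cni_nodes * AZURE_CNI_PODS_PER_NODE

    print(f"{cidr}: Kubenet {kubenet_nodes} nodes / {kubenet_pods} pods, "
          f"Azure CNI {cni_nodes} nodes / {cni_pods} pods")

capacity("10.240.0.0/24")   # Kubenet 251 nodes / 27,610 pods vs Azure CNI 8 nodes / 240 pods
capacity("10.240.0.0/16")   # what a much larger subnet would buy you
```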

Here is a side-by-side breakdown of Kubenet and Azure CNI:

| Area | Kubenet | Azure CNI |
|---|---|---|
| Capacity using a /24 address range | 251 nodes / 27,610 pods (110 pods per node) | 8 nodes / 240 pods (30 pods per node) |
| Max nodes per cluster | 400 (UDR max) | 1,000 (or more) |
| Network policy | Calico | Calico, Azure |
| Pod IPs | NAT’ed / UDR | Subnet-assigned |
| Latency | Slightly greater (NAT hop) | Best |
| Virtual nodes | No | Yes |
| Windows containers | No | Yes |
| Support | Calico community support | Supported by Azure support and the engineering team |
| Out-of-the-box logging | /var/log/calico inside the container | Rules added/deleted in iptables are logged on every host under /var/log/azure-npm.log |
| Conclusion | Best with limited IP space; most pod comms stay within the cluster; UDR management is acceptable | Available IP space; most pod comms go outside the cluster; no need to manage UDRs; need advanced features |

As you can see, you can get a lot more pods on Kubenet, and you will burn through far more IPs with Azure CNI. One might think to simply assign a large CIDR range such as a /16 instead of a /24 for the subnets when using Azure CNI. That would work, but most enterprise IT teams connecting AKS to existing networks don’t have that option because of the existing IP design; they are stuck working with the smaller IP address ranges they can use.

Microsoft has built a solution to the IP exhaustion problem: Azure CNI Overlay. Azure CNI Overlay for AKS has been around for a while but was recently released into public preview on 9/4/22. It helps us avoid IP exhaustion with our AKS clusters by carving a private /24 CIDR block out of a separate pod address space for every node and assigning pod IPs from that block.

With Azure CNI Overlay networking, IPs from an Azure subnet are assigned only to the nodes in the AKS cluster. The pods do not get IPs from an Azure subnet on the VNet hosting the nodes; instead, they are assigned IPs from a private CIDR range that is separate from that VNet. This design conserves the VNet’s IP address space. A routing domain is automatically created in the Azure networking stack for the pods’ private CIDR range, which creates the overlay network that handles pod-to-pod communication but nothing outside the overlay. Communication between pods and resources outside the overlay network goes through the node’s IP using Network Address Translation (NAT). This new design allows for east-west and north-south traffic with AKS while saving IPs.
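For a rough picture of how the addressing splits apart, here is a short sketch (again my own illustration; the CIDR ranges are example values, not anything prescribed by AKS). Node IPs come from the VNet subnet, while each node gets its own /24 block from the private pod CIDR.

```python
import ipaddress

node_subnet = ipaddress.ip_network("10.240.0.0/24")   # example VNet subnet: nodes only
pod_cidr = ipaddress.ip_network("192.168.0.0/16")     # example private overlay range: pods only

# Each node receives one /24 block from the pod CIDR for its pods.
per_node_blocks = list(pod_cidr.subnets(new_prefix=24))

print(f"Node subnet can hold ~{node_subnet.num_addresses - 5} nodes")   # 251 (5 IPs reserved by Azure)
print(f"Pod CIDR yields {len(per_node_blocks)} per-node /24 blocks")    # 256
print(f"First node's pod block: {per_node_blocks[0]}")                  # 192.168.0.0/24

# Pod-to-pod traffic is routed inside the overlay; traffic leaving the overlay
# is NATed behind the node's VNet IP, so the pod CIDR never touches the VNet.
```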

Keep in mind that Azure CNI Overlay networking is still new and in public preview. There are some limitations to note before you adopt it: Azure CNI Overlay can only be enabled on new AKS clusters (existing clusters can’t be converted to overlay), it’s only supported with Linux, and the Application Gateway Ingress Controller (AGIC) is not currently supported.

Here is a side-by-side breakdown of Kubenet and Azure CNI Overlay (from the Microsoft overlay documentation):

| Area | Kubenet | Azure CNI Overlay |
|---|---|---|
| Cluster scale | 400 nodes and 250 pods/node | 1000 nodes and 250 pods/node |
| Network configuration | Complex – requires route tables and UDRs on the cluster subnet for pod networking | Simple – no additional configuration required for pod networking |
| Pod connectivity performance | Additional hop adds minor latency | Performance on par with VMs in a VNet |
| Kubernetes network policies | Calico | Azure Network Policies, Calico |
| OS platforms supported | Linux only | Linux only |

For a deeper dive into Azure CNI Overlay for AKS visit the official documentation here: https://docs.microsoft.com/en-us/azure/aks/azure-cni-overlay

That brings us to the end of this blog post. If IP exhaustion with AKS is a concern, give Azure CNI Overlay a try. Also, keep an eye out for updates on Azure CNI Overlay; hopefully it will reach GA in the near future.
