I will be giving the Keynote at the next (MN)NUG (Minnesota Networking User Group) event on December 6th. (MN)NUG is a part of the (US)NUA (US Networking User Association) organization that hosts a vendor-neutral environment to talk networks. With beer.
After my talk, there will be a panel of networking experts from Target, Cologix, Cisco, and Arista Networks.
Here is my talk info:
Title:
Networking is more relevant than ever in a cloud-native world (cloud & Kubernetes)
Abstract:
Folks are often so focused on the new and shiny stuff in modern technology, such as cloud, Kubernetes, and DevOps, that core skills like networking get overlooked. Networking is as relevant as ever and is one of the top skills to have when working with these modern technologies. In this session, Steve is going to explore why you should start with networking when embarking on a journey into the cloud or Kubernetes.
Here is the full schedule:
4:30pm – 5:00pm Check-In / Open Mingle
5:00pm – 5:50pm Keynote Presentation: Networking is more relevant than ever in a cloud-native world (cloud & Kubernetes)
I am happy to announce I have been selected for the 2023 Pluralsight Author Advisory Board! I am honored to serve among 10 fellow authors.
As a member of this board, I will advocate for authors and help foster the relationship between Pluralsight and its authors. Serving on this board, serving the authors, and working with Pluralsight will be an exciting part of 2023’s journey. I am looking forward to it!
I am excited to announce that I published a Python course on Pluralsight! This course is titled “Python for Linux System Administration”. This is my 17th course with Pluralsight overall and my 6th for 2022. It will teach you how to utilize Python to administer Linux servers. This course is suitable for system administrators, DevOps engineers, and anyone working with servers running in the cloud.
In this course, Python for Linux System Administration, you’ll learn the core of the Python language, various Python admin modules, and how to combine Python scripts with other Linux tools for administration.
Here are some of the major topics that are covered in this course:
Gaining an understanding of the many benefits of using Python for systems administration.
What goes into setting up a Python environment and which IDEs are the best options to assist you with your Python scripting.
How to install and use Python to interact with the Linux system.
How to utilize Python modules such as the os module and psutil to perform various administrative functions.
When you’re finished with this course, you’ll have the skills and knowledge of the core of Python, how it can be used for administration, why you would use it for administration, how to get set up for Python scripting, insight into Python administrative modules & scripts, as well as some real-world examples of administering Linux servers with Python.
I hope you find value in this new Python for Linux System Administration course. Be sure to follow my profile on Pluralsight so you will be notified as I release new courses!
I will be speaking at Tech Summit Nigeria 2022. This event will be held in Lagos, Nigeria at the Microsoft ADC center. It is an event for Cloud & Mixed Reality professionals & enthusiasts. The website for the event is https://www.techsummitnigeria.com.
My session title is: “K8s is complex! Simplify its Deployment & Configuration“.
The abstract is: Understanding Kubernetes is complex. Designing its architecture is complex. Deploying it is complex. And configuring it is complex. K8s in general is complex. Spend less time getting your Kubernetes up and running and more time running your containerized apps!
In this session, Steve Buchanan will take you on a journey utilizing a tool named the AKS Construction Helper that can simplify your AKS Deployment & Configurations.
***Update***
It was a fun session with an engaged audience! Here are some pictures from the session.
If you missed my session you can watch the replay here:
Yesterday a new article titled “Build and deploy apps on AKS using DevOps and GitOps” was published. This is an article I had been working on for a while, and it is the first piece of work I can share publicly since joining Microsoft. I am working on many other things I can’t share publicly at the moment. :-)
The article is a part of the Azure Architecture Center. This article is about modernizing end-to-end app build and deploy using containers, continuous integration (CI) via GitHub Actions for build and push to an Azure Container Registry, as well as GitOps via Argo CD for continuous deployment (CD) to an AKS cluster.
The article explores deploying a Python and Flask-based app via two CI/CD approaches: push-based and pull-based (GitOps). It is complete with a pros and cons comparison of both approaches and architecture diagrams for each that you can download. Here is a screenshot of the pull-based (GitOps) architecture:
The technologies used in this article and scenario include:
I hope that you find all of this useful. Now go check out the article and deploy the app using the approaches. Stay tuned for more from me at Microsoft and for more blog posts here!
One of the top concerns I see from companies when architecting AKS is running out of IP addresses. This is commonly known as IP exhaustion. This concern comes up when selecting the network model for AKS, specifically with Azure CNI.
Companies often lean towards Azure CNI at first but quickly opt for Kubenet. Azure CNI provides real benefits on Azure: it has deeper integration between Kubernetes and Azure networking, so you don’t have to manually configure routing for traffic to flow from pods to other resources on Azure VNets. Pods get full network connectivity and can be reached via their private IP addresses. Azure CNI also supports Virtual Nodes (Azure Container Instances), either Azure or Calico network policies, and Windows containers. It does, however, require more IP address space. Traditional Azure CNI assigns an IP address to every pod from a subnet reserved for pods, pre-reserving a set of IPs on every node. This method can lead to exhausting the available IPs.
The alternative to Azure CNI with AKS is Kubenet. A lot of companies opt for Kubenet to avoid IP exhaustion because it conserves IP address space. Kubenet assigns private IP addresses to pods and does not have routing integration with Azure networking; in order to route from pods to Azure VNets you need to manually configure and manage user-defined routes (UDRs). With Kubenet, a simple /24 IP CIDR range can support up to 251 nodes in an AKS cluster, which in turn supports up to 27,610 pods (at 110 pods per node).
With Azure CNI, the same /24 IP CIDR range would only be able to support up to 8 nodes in the cluster, supporting up to 240 pods (the default max with Azure CNI is 30 pods per node, and each node allocates 31 IP addresses: 1 for the node + 30 for its pods).
Here is a side by side breakdown of Kubenet and Azure CNI:
| Area | Kubenet | Azure CNI |
| --- | --- | --- |
| Capacity using a /24 address range | 251 nodes / 27,610 pods (110 pods per node) | 8 nodes / 240 pods (30 pods per node) |
| Max nodes per cluster | 400 (UDR max) | 1,000 (or more) |
| Network policy | Calico | Calico, Azure |
| Pod IPs | NAT’ed / UDR | Subnet-assigned |
| Latency | Slightly greater (NAT hop) | Best |
| Virtual nodes | No | Yes |
| Windows containers | No | Yes |
| Support | Calico community support | Supported by Azure support and the engineering team |
| Out-of-the-box logging | /var/log/calico inside the container | Rules added/deleted in iptables are logged on every host under /var/log/azure-npm.log |
| Conclusion | Best with limited IP space; most pod comms within the cluster; UDR management is acceptable | Available IP space; most pod comms outside the cluster; no need to manage UDRs; need advanced features |
As you can see, you can get a lot more pods on Kubenet, and you will burn through a lot more IPs with Azure CNI. One would think the answer with Azure CNI is to just assign a larger CIDR for the subnets, like a /16 instead of a /24. This would work; however, most enterprise IT teams connecting AKS to existing networks don’t have that option based on the existing IP design and are stuck working with the smaller IP address ranges available to them.
Microsoft has built a solution to the IP exhaustion problem: Azure CNI Overlay. Azure CNI Overlay for AKS has been in the works for a while and was recently released into public preview on 9/4/22. Azure CNI Overlay for AKS helps us avoid IP exhaustion with our AKS clusters. It does this by assigning pod IPs from a private CIDR range that is logically separate from the VNet, carving out a /24 block from that range for the pods on every node.
After designing and architecting AKS the next step is to deploy your cluster/s. It is ideal to build your AKS deployments out as code.
This means taking your Azure infrastructure & AKS cluster/s design and scripting them as IaC (Infrastructure as Code). Scripting the AKS deployment vs manually deploying gives you documentation as code, standardization, & a templatized deployment for repeatability. You can deploy this code as is, place it in a pipeline for ease of deployment, or put it in inner-source or a service catalog for access by multiple teams.
Microsoft has built a tool named the AKS Construction helper to accelerate building out your IaC for AKS. This tool is not as well-known as it should be. I wanted to blog about this tool to share this great resource that will save you tons of time. The AKS Construction helper was originally launched by Keith Howling of Microsoft. The core contributors to this tool have been Gordon Byers and Keith Howling with contributions from others as well.
The AKS Construction helper unifies guidance provided by the AKS Secure Baseline, Well-Architected Framework, Cloud Adoption Framework, and Enterprise-Scale. It is also part of the official AKS Landing Zone Accelerator (Enterprise-Scale). The AKS Construction helper lets you configure your AKS deployment using wizard/form-style selections. After you complete your selections, the tool gives you IaC code that you can copy to perform the AKS deployment/s. You can get code for Az CLI, a GitHub Actions workflow, Terraform, or a parameters file that can be used with an ARM template.
Let’s go ahead and take a tour of the tool.
The tool lets you select Operations Principles or Enterprise-Scale path for configuring the options.
This helps narrow down the overall design requirements of your AKS deployment.
The next section of the AKS Construction helper is to fine-tune your AKS deployment. This gives you the chance to tweak things like the cluster name, K8s version, resource group, region to be created in, IP addressing and CIDR ranges, initial RBAC, SLA, autoscaling, upgrade configuration, cluster networking, add-ons such as an ingress controller (App Gateway, NGINX, etc.), monitoring such as Azure Monitor, Azure Policy, service mesh, secret storage, KEDA, and GitOps with Flux, and it even has a few options to deploy some sample apps. This is done across 5 tabs in the Fine-tune and Deploy section.
After you have set all of the configurations for your cluster, there is code available for you to copy on the Deploy tab. Again, you have options for Az CLI, a GitHub Actions workflow, Terraform scripts, or an ARM template parameters file. Running the deployment code will deploy your AKS cluster exactly how you configured it in the AKS Construction helper tool.
You can access this URL at any time to pick up where you left off with your AKS deployment configuration.
That brings us to the end of this blog post. Stop wasting time, head over to the tool, and start using this for all of your AKS Deployments. Here are the links for the tool:
With Kubernetes you will eventually need to run stateful applications. This is more common than you might think. If you have never run stateful apps on Kubernetes before, it can be a scary thing: you are adding more moving parts to a Kubernetes cluster, deploying the app, and then managing a stateful application on Kubernetes that requires state.
In this blog post I am going to take you on a short journey to gain an understanding of stateless vs stateful applications; how storage works in Kubernetes, touching on volumes, storage classes, persistent volumes (PVs), and persistent volume claims (PVCs); what StatefulSets are; persistent state with pods; and good practices for running stateful apps on Kubernetes.
Stateless
A stateless app is an application program that does not save client data generated in one session for use in the next session with that client.
Stateful
A stateful app is a program that saves client data from the activities of one session for use in the next session.
The data that is saved is called the application’s state. Here is a visual covering the differences between Stateless and Stateful applications:
Volumes
Here is a breakdown of what volumes are:
A volume is a directory, typically with data in it, that is accessible to the containers in a pod.
A volume represents a way to store, retrieve, and persist data across pods through an application’s lifecycle.
The volume modes Kubernetes supports are Filesystem and Block.
Volumes are backed by different types of storage such as NFS, iSCSI, or other cloud storage (e.g. awsElasticBlockStore, azureDisk, gcePersistentDisk, etc.).
When a pod ceases to exist, Kubernetes destroys its ephemeral volumes; however, Kubernetes does not destroy persistent volumes (see the example pod spec after this list).
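To make the breakdown above concrete, here is a minimal sketch of a pod that mounts an ephemeral emptyDir volume; the pod name, image, and mount path are illustrative placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-demo                # illustrative name
spec:
  containers:
    - name: web
      image: nginx                 # placeholder image
      volumeMounts:
        - name: scratch
          mountPath: /usr/share/nginx/html   # directory where the volume appears inside the container
  volumes:
    - name: scratch
      emptyDir: {}                 # ephemeral volume; destroyed when the pod ceases to exist
```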
StorageClasses
Here is a breakdown of what StorageClasses are:
Define types of storage tiers like Premium and Standard through Storage Classes in Kubernetes.
Give K8s admins a way to describe the “classes” of storage they offer.
StorageClasses define the provisioner, parameters, and reclaimPolicy used when a PersistentVolume is provisioned.
When a pod is deleted the underlying storage resource can either be deleted or kept for use with a future pod.
A reclaimPolicy controls the behavior of the underlying storage resource when the pod and its persistent volume are no longer required.
Example of a configuration file for a StorageClass:
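Here is a minimal sketch of what such a StorageClass manifest can look like; the class name is made up and it assumes the Azure Disk CSI provisioner, so swap in whatever provisioner and parameters your cluster actually uses:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-premium-retain     # illustrative class name
provisioner: disk.csi.azure.com    # assumes the Azure Disk CSI driver
parameters:
  skuName: Premium_LRS             # storage tier, e.g. Premium vs Standard
reclaimPolicy: Retain              # keep the underlying disk when the PV is released
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
```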
Retain –
Allows for manual reclamation of the resource. The PV is not available for another claim because the previous claimant’s data remains on the volume; a K8s admin must manually reclaim the volume.
Delete –
The delete reclaim policy removes the PV resource from the K8s cluster as well as the associated storage asset, such as cloud storage, NFS, etc.
Recycle –
Performs a basic scrub on the volume & makes it available again for a new PVC.
Persistent Volumes (PVs)
Here is a breakdown of what Persistent Volumes are:
A persistent volume (PV) is a storage resource created and managed by the Kubernetes API that can exist beyond the lifetime of an individual pod.
A persistent volume can be manually provisioned by a Kubernetes admin or dynamically provisioned using Storage Classes by the Kubernetes API server.
Dynamic provisioning uses a StorageClass to identify what type of storage (NFS, iSCSI, or cloud-based) needs to be created.
Example of a configuration file for the PersistentVolume:
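Here is a minimal sketch of a statically provisioned PersistentVolume backed by NFS; the server address, export path, size, and class name are placeholder values:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-example             # illustrative name
spec:
  capacity:
    storage: 10Gi                  # placeholder size
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs            # illustrative class name
  nfs:
    server: 10.0.0.5               # placeholder NFS server address
    path: /exports/data            # placeholder export path
```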
Let’s take a look at how the lifecycle of volumes and claims flows:
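Roughly: a PV is provisioned (statically by an admin or dynamically through a StorageClass), a workload requests storage with a PersistentVolumeClaim, Kubernetes binds the claim to a matching PV, pods mount the claim, and when the claim is released the reclaim policy decides what happens to the underlying storage. Here is a minimal sketch of the claim side, assuming the illustrative StorageClass from the earlier sketch; the name and size are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim                 # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: managed-premium-retain   # illustrative class from the StorageClass sketch above
  resources:
    requests:
      storage: 5Gi                 # placeholder size
```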
StatefulSets
Here is a breakdown of what Stateful Sets are:
StatefulSets are Kubernetes objects that are used when we need each pod to have its own independent state and use its own individual volume.
With StatefulSets each pod is assigned a unique name & the unique name stays with it even if the pod is deleted & recreated.
Headless services are primarily used when deploying StatefulSet applications. They don’t operate like load balancers and are not assigned a cluster IP the way a regular service is.
StatefulSets are typically used when the following is needed (see the sketch after this list):
Unique network identifiers for pods
Persistent storage for retaining data
Ordered, graceful deployment and scaling of pods
Ordered, automated rolling updates of the app
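Here is a minimal sketch of a StatefulSet paired with a headless Service (clusterIP: None) that illustrates the points above; the names, image, replica count, and storage class are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web                        # illustrative name
spec:
  clusterIP: None                  # headless: no load-balanced virtual IP
  selector:
    app: web
  ports:
    - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web                 # ties pods to the headless service for stable per-pod DNS names
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx             # placeholder image
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:            # each pod gets its own PVC, and therefore its own volume
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: managed-premium-retain   # illustrative class from the StorageClass sketch
        resources:
          requests:
            storage: 1Gi           # placeholder size
```

Each replica (web-0, web-1, web-2) keeps its name and its own PVC across restarts, which is what gives the app its stable identity and state.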
Some Good Practices When Running Stateful Apps on Kubernetes
That wraps up this blog post! Thanks for reading and stay tuned to my blog for more content on Kubernetes soon.
Argo CD has something called the Application reconciliation timeout. This controls how often Argo CD checks the Git repository for changes to your applications: it looks for changes, and when it sees changes it applies the desired state from the repo to the Kubernetes (K8s) cluster. By default the timeout period is set to 3 minutes. This is set in the General Argo CD configuration.
The General Argo CD configuration is set in the argocd-cm ConfigMap. And the argocd-cm ConfigMap is deployed in the argocd namespace.
You can view what is currently set by running the following kubectl command on your K8s cluster that is running your Argo CD instance:
kubectl describe configmaps argocd-cm -n argocd
The output will look like the following:
You can also see that the argocd-cm Data is empty by running kubectl get configmaps -n argocd or if you are using AKS navigate to ConfigMaps in the Azure portal like in the following screenshot.
Most Argo CD instances are running the default settings for its configurations. The argocd-server component reads and writes to the argocd-cm ConfigMap and other Argo configuration ConfigMaps based on admin user interactions with the Argo CD web UI or the Argo CD CLI. It is normal for it to be empty with Data at 0 if you have not changed any defaults or set anything directly in the ConfigMap yet.
To change the Application reconciliation timeout you need to do the following:
The Application reconciliation timeout can be found on line 283: “timeout.reconciliation: 180s”.
Change “180s” to whatever value you want, e.g. change it to “60s” to reduce the sync interval to 1 minute.
Remove all of the other settings in the file except for the Application reconciliation timeout. The file should look like this:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/name: argocd-cm
    app.kubernetes.io/part-of: argocd
data:
  # Application reconciliation timeout is the max amount of time required to discover if a new manifests version got
  # published to the repository. Reconciliation by timeout is disabled if timeout is set to 0. Three minutes by default.
  # > Note: argocd-repo-server deployment must be manually restarted after changing the setting.
  timeout.reconciliation: 60s
```
5. Save the file.
6. Connect to the Kubernetes cluster that is running Argo CD and apply the argocd-cm ConfigMap file you just updated by running the following:
kubectl apply -f argocd-cm.yaml -n argocd
7. Run the following to verify the update was applied:
kubectl describe configmaps argocd-cm -n argocd
You should also notice that at least 1 item is now listed under Data for the ConfigMap.
8. It is a good practice to redeploy the argocd-repo-server after updating the argocd-cm ConfigMap. You can redeploy the argocd-repo-server by running the following:
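kubectl rollout restart deployment argocd-repo-server -n argocd
(This assumes the default install, where the repo server runs as a Deployment named argocd-repo-server in the argocd namespace.)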
That’s it! Now your app in Argo CD will sync on the new Application Reconciliation Timeout that you set. Check back soon for more Azure, Cloud, Kubernetes, GitOps, Argo CD content and more.
BTW: For more in-depth information on GitOps and Argo CD check out my GitOps and Argo CD courses on Pluralsight here:
Recently Codefresh launched the 1st certification in its GitOps certification path. This one is called “GitOps Fundamentals“. You can find it here: https://codefresh.learnworlds.com .
It takes you through the basics of GitOps to gain theoretical knowledge, and how to utilize Argo CD as the GitOps operator to gain hands-on knowledge. You will learn about both and will have questions on both in the quizzes and final exam.
They also touch on Argo Rollouts to go over Progressive Delivery with topics such as blue/green deployments and canary deployments. This is the 1st ever GitOps certification and it’s free! They do have plans for GitOps at Edge and GitOps at Scale certifications.
You can find more information about the GitOps certification and Codefresh’s future plans for it on this blog by Hannah Seligson (one of the authors of the course and exam) here: https://codefresh.io/blog/get-gitops-certified-argo.
I jumped all over this opportunity to get certified on GitOps by signing up for the course, taking the training, and sitting the exam! I passed, and now I am GitOps certified.
Here is the certification:
GitOps is gaining more adoption every day in the Kubernetes space. Also, Argo CD is growing extremely fast as one of the top, if not the top, GitOps operators. I recommend you check out this Codefresh GitOps certification and get GitOps certified, as this pattern and the technology behind it are growing at a super fast rate.
Also note, it looks like Weaveworks is planning to launch a “Certified GitOps Practitioner (CGP)” certification soon. I would guess the Weaveworks GitOps certification will contain content on Flux, another GitOps operator. You can learn more about their upcoming GitOps certification here: https://www.weave.works/certified-gitops-practitioner
Also for more training on GitOps and Argo CD be sure to check out my GitOps and Argo CD courses on Pluralsight here: