New Platform Engineering Blog Post on Pluralsight

I am excited to announce my second-ever blog post on Pluralsight.com. This post is about platform engineering. In it I break down what platform engineering is, the business problems it solves, and how to know if your organization is ready to roll it out.

In the blog post, we explore why there is so much hype around platform engineering, whether platform engineering is a replacement for DevOps, how Internal Developer Platforms help close infrastructure gaps, and more. Be sure to check it out!

👉 Read the blog post here:

https://www.pluralsight.com/resources/blog/it-ops/what-is-platform-engineering

Read more

New GitOps Blog Post on Pluralsight

I am excited to announce that I contributed a blog post on Pluralsight.com. This post is about GitOps: it takes you through what GitOps is and why you should learn it.

In the blog post, we look at the benefits of GitOps for developers, work through GitOps tools and frameworks, cover what you need to get started with GitOps, and more. Be sure to check it out!

👉 Read the blog post here:

https://www.pluralsight.com/resources/blog/it-ops/what-is-gitops

Read more

Azure Friday: Exploring Automated Deployments for AKS with Steve Buchanan and Scott Hanselman

Hey everyone, today I’m super excited to tell you about a recent episode of Azure Friday that I was lucky enough to be a guest on.

Azure Friday is a weekly video series hosted by the legendary Scott Hanselman, where he interviews experts and developers on various Azure-related topics. In this episode, we talked about Automated Deployments for AKS, a new feature that makes it super easy to deploy your apps to Azure Kubernetes Service.

If you’re not familiar with AKS, it’s a managed Kubernetes service that lets you run containerized applications on Azure without having to worry about the complexity of managing the cluster. It’s a great way to scale your apps and take advantage of the benefits of Kubernetes, such as high availability, load balancing, and service discovery.

But what if you’re not familiar with containers or Kubernetes? What if you just have some code in a GitHub repo and you want to run it on AKS? That’s where Automated Deployments for AKS comes in. It’s a feature that simplifies the Kubernetes development process by taking care of the tedious work of containerization for you. It uses a tool called Draft, which automatically detects the language and framework of your app, creates a Dockerfile and a Helm chart for you, builds and pushes the image to Azure Container Registry, and deploys the app to AKS. All with just a few clicks in the Azure Portal.
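To give a feel for what that automation looks like under the hood, here is a rough sketch of the kind of GitHub Actions workflow Automated Deployments wires up for you: build the container image, push it to Azure Container Registry, and deploy the manifests to AKS. This is a simplified, hypothetical example; the registry, resource group, and cluster names are placeholders, and the workflow the feature actually generates will differ in its details.

name: Build and deploy to AKS
on:
  push:
    branches: [main]

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      # Log in to Azure (credentials stored as a GitHub secret)
      - uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}

      # Build the image from the repo's Dockerfile and push it to ACR
      - name: Build and push image
        run: az acr build --registry myregistry --image myapp:${{ github.sha }} .

      # Point kubectl at the target AKS cluster
      - uses: azure/aks-set-context@v3
        with:
          resource-group: my-rg
          cluster-name: my-aks-cluster

      # Apply the generated Kubernetes manifests to the cluster
      - uses: azure/k8s-deploy@v4
        with:
          manifests: |
            manifests/deployment.yaml
            manifests/service.yaml
          images: myregistry.azurecr.io/myapp:${{ github.sha }}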

Sounds amazing, right? Well, that’s what I wanted to show Scott in this episode. I had an app hosted in a GitHub repo that I wanted to run on AKS. The app was a simple web app that displayed some data from a database. I had already created a few resources in Azure, such as a resource group, an Azure Container Registry, and an AKS cluster. All I needed to do was use Automated Deployments for AKS to get this app from code to running on a cluster.

So how did it go? Well, you’ll have to watch the episode to find out. But spoiler alert: it was super easy and fast. In just a few commands, I went from code to an app running on AKS. Scott was impressed and so was I. We had a great time chatting about how Automated Deployments for AKS works under the hood, some of the benefits and limitations of using it, and how it can help developers get started with containers and Kubernetes.

Check out the episode here:

https://aka.ms/azfr/749

With Automated Deployments, Microsoft is opening up new avenues for developers to embrace the power of containers and AKS, enabling them to effortlessly build scalable and robust applications.

If you’re interested in learning more about Automated Deployments for AKS, you can check out the documentation here: https://learn.microsoft.com/en-us/azure/aks/automated-deployments. It’s available today in public preview, so you can try it out for yourself and see how easy it is to run your apps on AKS.

That’s all for today. I hope you enjoy this episode of Azure Friday as much as I did. It was an honor and a pleasure to be a guest on Scott’s show and talk about one of my favorite topics: Azure Kubernetes Service. If you have any questions or feedback, feel free to leave a comment or reach out to me on Twitter at @Buchatech. Thanks for reading and happy coding!

Read more

Combining Kubernetes Community and Careers

I was a guest on a very popular cloud podcast: the Cloudcast, one of the longest-running cloud podcasts around, having started back in 2011.

I was on episode #714 titled “Combining Kubernetes Community and Careers”. In this episode, I had a great time chatting with Aaron Delp about my journey in the Kubernetes community, building a personal brand through education and sharing, content creation, and maintaining a healthy work-life balance.

Here are the show notes breaking down the topics:

Topic 1 – Today we are going to be talking about careers and Kubernetes. Steve, welcome to the show! You have a super fascinating career journey, can you give everyone a quick introduction?

Topic 2 – I heard you over on the Kubernetes Unpacked podcast. First off, it’s hard to keep up with everything you are doing in the community these days. What is your current focus and passion? Have you reached 20 courses on Pluralsight yet?!

Topic 3 – How do you balance the day job (Program Manager for AKS) and the nights and weekends (PluralSight courses, blogging, podcasts, etc.)? Besides learning and sharing, what benefits are you seeing with this approach?

Topic 4 – I believe your journey parallels our journey here. We started the podcast to learn and give back to the community. Prior to the podcast, blogging was the big thing (we are completely aging ourselves I know) but I think it is safe to say blogging isn’t a primary source today. How would you recommend folks new to the industry get started sharing their journey? Where is the most “bang for your buck” these days?   

Topic 5 – Let’s talk about Kubernetes and specifically AKS, what are customers finding new and interesting? What are the leading solutions and integrations you see combined with AKS? How do you create a “stack” in AKS (GitHub Actions, Azure Container Registry, etc.)

You can listen to the full episode here:

https://www.buzzsprout.com/3195/12719684-combining-kubernetes-community-and-careers

Read more

Guest on StreamingClouds – Navigating AKS: Scenarios and strategies, GitOps, Fleet Management, Platform Engineering and more

I was recently a guest on StreamingClouds, a multicloud live stream by Microsoft CSA Kevin Evans and Microsoft MVP Robin Smorenburg, with topics ranging from cloud native to hybrid, security, architecture, strategy, careers, personal development, and more.

StreamingClouds is more than just a live-stream podcast; it’s also a diverse community where the members can all learn from each other.

To highlight what we covered in the episode, we discussed how to effectively use Microsoft’s AKS documentation, reference architectures, scripts, and tools for your AKS project. We also touched on GitOps, Fleet Management, Platform Engineering and more.

Here is a full description of what we covered on the episode:
Starting an AKS project soon or in the middle of one and lost? Have you tried to use the Microsoft AKS documentation, reference architectures, scripts, and tools but feel stuck on what to use and when to use it? Let’s talk about it and get you the guidance you need. There is a formula and framework to using these AKS artifacts from Microsoft.

In 2022 I wrote a couple of blog posts that give guidance on how to utilize the Microsoft AKS artifacts and tools. In these blog posts I baked in experience from my days delivering AKS projects to Fortune 500 enterprises. We thought it would be a good idea to dive into the content from these posts live on the stream, talking through these topics to help listeners who are embarking on an AKS journey. Here are the aforementioned blog posts for reference:

We dove into:

Architecture Design:
Baseline architecture for an Azure Kubernetes Service (AKS) cluster
AKS Secure Baseline with Private Cluster
AKS baseline for multi-region clusters
AKS regulated cluster for PCI
Advanced Azure Kubernetes Service (AKS) microservices architecture

Deployment:
AKS landing zone accelerator
AKS Construction Helper
AKS Baseline Automation
Azure Draft for AKS

Operation:
Operations management considerations for Azure Kubernetes Service
Azure Kubernetes Services (AKS) day-2 operations guide

You can watch a recording of the stream here:

Read more

Kubernetes Panel Event

In February Come Cloud With Us is hosting a Kubernetes panel with some of the industry’s BEST Kubernetes experts. I am honored and humbled to be one of the panelists. This panel consists of K8s experts from Dell, Google, Microsoft, Intercept, United Wholesale Mortgage, and Admincontrol. This is a global panel with panelists and hosts from the United States, Norway, the United Kingdom, and Canada. Several of the panelists are also authors, Microsoft MVPs, and CNCF Ambassadors.

Here is a breakdown of the hosts and the panelists:

The hosts:

Abdul Kazi – Cloud Expert

Chris Gill – Cloud Expert and Microsoft MVP

The K8s Panelists:
Kristina Devochko – Microsoft Azure MVP
Kaslin Fields – Developer Advocate at Google
Kat Cosgrove – Lead Developer Advocate at Dell
Steve Buchanan – Principal Program Manager at Microsoft
Nills Franssens – Director of Digital and Application Innovation at Microsoft
Richard Hooper – Microsoft Azure MVP
Glen Belton – Kubernetes Platform Engineer

The panel will discuss Kubernetes and answer attendee questions. This will be a virtual event, and it is one you DON’T want to miss! Mark your calendars for Thursday, February 16, 2023, from 4:00 to 5:30 PM CST!

Register for the event here:

https://www.meetup.com/comecloudwithus/events/290494259

***Update

If you missed the live panel, here is the recording:

Read more

Guest on Kubernetes Unpacked Podcast EP014 – “Using GitOps and AKS to Build and Deploy Apps”

I was recently a guest on Michael Levan’s Kubernetes Unpacked Podcast on the Packet Pushers network.

This is Kubernetes Unpacked episode #014, titled “Using GitOps and AKS to Build and Deploy Applications.”

Michael and I talked about using GitOps and Azure Kubernetes Service (AKS) to automate the building and deployment of applications. We also chatted about an entire architecture incorporating AKS, GitHub Actions, Azure Container Registry, GitHub, and ArgoCD, along with how it all comes together to make a useful stack. Check out the podcast below.
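To give a sense of the Argo CD piece of that stack, here is a minimal, hypothetical Application manifest: GitHub Actions builds and pushes the image to Azure Container Registry and updates the manifests in a Git repo, and Argo CD keeps the AKS cluster in sync with that repo. The repo URL, path, and namespace below are placeholders, not values from the episode.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/my-org/my-app-config   # config repo watched by Argo CD
    targetRevision: main
    path: manifests                                    # folder holding the Kubernetes manifests
  destination:
    server: https://kubernetes.default.svc             # the AKS cluster Argo CD runs in
    namespace: my-app
  syncPolicy:
    automated:
      prune: true      # remove resources that were deleted from Git
      selfHeal: true   # revert changes made directly in the cluster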

Link to the podcast:

https://packetpushers.net/podcast/kubernetes-unpacked-014-using-gitops-and-aks-to-build-and-deploy-applications/

Listen here:

Read more

IP exhaustion in AKS got you down? Try Azure CNI Overlay.

One of the top concerns I see from companies when architecting AKS is running out of IP addresses, commonly known as IP exhaustion. This concern comes up when selecting the network model for AKS, specifically with Azure CNI.

Companies often lean towards Azure CNI at first but quickly opt for Kubenet. Azure CNI provides real benefits on Azure: it has deeper integration between Kubernetes and Azure networking, so you don’t have to manually configure routing for traffic to flow from pods to other resources on Azure VNets, and pods get full network connectivity and can be reached directly via their private IP addresses. It also supports Virtual Nodes (Azure Container Instances), either Azure or Calico network policies, and Windows containers. Azure CNI does, however, require more IP address space. Traditional Azure CNI assigns an IP address to every pod from the cluster subnet, pre-reserving a set of IPs on every node. This method can quickly exhaust the available IPs.

The alternative to Azure CNI with AKS is Kubenet. A lot of companies opt for Kubenet to avoid IP exhaustion because it conserves IP address space. Kubenet assigns pods private IP addresses from a range that is separate from the VNet, so there is no built-in routing to Azure networking; in order to route traffic from pods to Azure VNets you need to manually configure and manage user-defined routes (UDRs). With Kubenet, a simple /24 IP CIDR range can support up to 251 nodes in an AKS cluster, which in turn supports up to 27,610 pods (at 110 pods per node).

With Azure CNI, the same /24 IP CIDR range supports only up to 8 nodes and roughly 240 pods. The math: a /24 provides 256 addresses and Azure reserves 5 per subnet, leaving 251 usable; with the default maximum of 30 pods per node, Azure CNI allocates 31 IP addresses per node (1 for the node + 30 for pods), so 251 / 31 works out to 8 nodes, and 8 × 30 = 240 pods.

Here is a side by side breakdown of Kubenet and Azure CNI:

| Area | Kubenet | Azure CNI |
| --- | --- | --- |
| Capacity using a /24 address range | 251 nodes / 27,610 pods (110 pods per node) | 8 nodes / 240 pods (30 pods per node) |
| Max nodes per cluster | 400 (UDR max) | 1,000 (or more) |
| Network policy | Calico | Calico, Azure |
| Pod IPs | NAT’ed / UDR | Subnet-assigned |
| Latency | Slightly greater (NAT hop) | Best |
| Virtual nodes | No | Yes |
| Windows containers | No | Yes |
| Support | Calico community support | Supported by Azure support and the engineering team |
| Out-of-the-box logging | /var/log/calico inside the container | Rules added/deleted in IPTables are logged on every host under /var/log/azure-npm.log |
| Conclusion | Best with limited IP space; most pod comms within the cluster; UDR management is acceptable | Available IP space; most pod comms outside the cluster; no need to manage UDRs; advanced features needed |

As you can see, you get a lot more pods with Kubenet, and you burn through a lot more IPs with Azure CNI. One might think the answer with Azure CNI is to just assign a larger CIDR for the subnets, like a /16 instead of a /24. That would work; however, most enterprise IT teams connecting AKS to existing networks don’t have that option because of the existing IP design, and they are stuck working with the smaller IP address ranges they can use.

Microsoft has built a solution to the IP exhaustion problem: Azure CNI Overlay. Azure CNI Overlay for AKS has been around for a while but was recently released into public preview on 9/4/22. It helps us avoid IP exhaustion with our AKS clusters by assigning pod IPs from a private CIDR range that is logically separate from the VNet, carving a /24 out of that range for the pods on each node, so only the nodes consume IP addresses from your VNet subnet.

Read more

Simplify your AKS IaC Deployments using the AKS Construction Helper tool

After designing and architecting AKS, the next step is to deploy your cluster(s). It is ideal to build your AKS deployments out as code.

This means taking your Azure infrastructure and AKS cluster design and scripting them as IaC (Infrastructure as Code). Scripting the AKS deployment versus deploying manually gives you documentation as code, standardization, and a templatized deployment for repeatability. You can deploy this code as is, place it in a pipeline for ease of deployment, or put it in inner-source or a service catalog for access by multiple teams.

Microsoft has built a tool named the AKS Construction helper to accelerate building out your IaC for AKS. This tool is not as well-known as it should be. I wanted to blog about this tool to share this great resource that will save you tons of time. The AKS Construction helper was originally launched by Keith Howling of Microsoft. The core contributors to this tool have been Gordon Byers and Keith Howling with contributions from others as well.

The AKS Construction helper unifies guidance provided by the AKS Secure Baseline, Well-Architected Framework, Cloud Adoption Framework, and Enterprise-Scale. It is also part of the official AKS Landing Zone Accelerator (Enterprise-Scale). The AKS Construction helper lets you configure your AKS deployment using wizard/form-style selections. After you complete your selections, the tool gives you IaC code that you can copy to perform the AKS deployment(s). You can get code for the Az CLI, a GitHub Actions workflow, Terraform, or a parameters file that can be used with an ARM template.

Let’s go ahead and take a tour of the tool.

The tool lets you select either the Operations Principles path or the Enterprise-Scale path for configuring the options.

This helps narrow down the overall design requirements of your AKS deployment.

The next section of the AKS Construction helper is where you fine-tune your AKS deployment. This gives you the chance to tweak things like the cluster name, Kubernetes version, resource group and region to be created, IP and CIDR ranges, initial RBAC, SLA, autoscaling, upgrade configuration, cluster networking, add-ons such as an ingress controller (App Gateway, NGINX, etc.), monitoring such as Azure Monitor, Azure Policy, service mesh, secret storage, KEDA, and GitOps with Flux, and it even has a few options to deploy some sample apps. This is done across 5 tabs in the Fine tune and Deploy section.

After you have set all of the configurations for your cluster, the code is available for you to copy on the Deploy tab. Again, you have options for the Az CLI, a GitHub Actions workflow, Terraform scripts, or an ARM template parameters file. Running the deployment code will deploy your AKS cluster exactly how you configured it in the AKS Construction helper tool.

What if you are not ready to deploy your AKS Clusters now but you do not want to lose your configuration? The tool has you covered. At the end of the Deploy Cluster code you can click the link as shown in the screenshot to get a URL for your configuration.

The URL will look similar to this:

https://azure.github.io/AKS-Construction/?deploy.deployItemKey=deployArmCli&ops=oss&preset=defaultOps&deploy.location=EastUS2&addons.ingress=nginx&addons.monitor=aci&addons.openServiceMeshAddon=true&addons.fluxGitOpsAddon=true

You can access this URL at any time to pick up where you left off with your AKS deployment configuration.

That brings us to the end of this blog post. Stop wasting time, head over to the tool, and start using this for all of your AKS Deployments. Here are the links for the tool:

The wizard-driven tool can be found here:

https://azure.github.io/AKS-Construction

The GitHub Repository for the tool can be found here:

https://github.com/Azure/AKS-Construction

Read more

Running Stateful Apps in Kubernetes

With Kubernetes, you will eventually need to run stateful applications. This is more common than you think. If you have never run stateful apps on Kubernetes before, it can be intimidating: it means adding more moving parts to the cluster, deploying the app, and managing your stateful application(s) when they require state.

In this blog post I am going to take you on a short journey to gain an understanding of stateless vs. stateful applications; how storage works in Kubernetes, touching on volumes, storage classes, persistent volumes (PVs), and persistent volume claims (PVCs); what StatefulSets are; persistent state with pods; and good practices for running stateful apps on Kubernetes.

Stateless

A stateless app is an application program that does not save client data generated in one session for use in the next session with that client.

Stateful

A stateful app is a program that saves client data from the activities of one session for use in the next session.

The data that is saved is called the application’s state. Here is a visual covering the differences between Stateless and Stateful applications:

Volumes

Here is a breakdown of what volumes are:

  • A volume is a directory, typically with data in it, that is accessible to the containers in a pod.
  • A volume represents a way to store, retrieve, and persist data across pods through an application’s lifecycle.
  • The volume modes Kubernetes supports are Filesystem and Block.
  • Volumes are backed by different types of storage such as NFS, iSCSI, or other cloud storage (i.e. awsElasticBlockStore, azureDisk, gcePersistentDisk, etc.).
  • When a pod ceases to exist, Kubernetes destroys its ephemeral volumes; however, Kubernetes does not destroy persistent volumes.
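To make that concrete, here is a minimal example pod spec (an illustration, not from the original post) that mounts an ephemeral emptyDir volume, so the data only lives as long as the pod does:

apiVersion: v1
kind: Pod
metadata:
  name: volume-demo
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: cache          # mounts the volume defined below
          mountPath: /cache
  volumes:
    - name: cache
      emptyDir: {}             # ephemeral volume, destroyed with the pod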

StorageClasses

Here is a breakdown of what StorageClasses are:

  • StorageClasses let you define types of storage tiers, like Premium and Standard, in Kubernetes.
  • They give K8s admins a way to describe the “classes” of storage they offer.
  • StorageClasses define the provisioner, parameters, and reclaimPolicy used when a PersistentVolume is provisioned.
  • When a pod is deleted, the underlying storage resource can either be deleted or kept for use with a future pod.
  • The reclaimPolicy controls the behavior of the underlying storage resource when the pod and its persistent volume are no longer required.

Example of a configuration file for a StorageClass:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: managed-premium-retain
provisioner: kubernetes.io/azure-disk
reclaimPolicy: Retain
parameters:
  storageaccounttype: Premium_LRS
  kind: Managed

Reclaim Policy

Here is a breakdown of the reclaim policies:

  • Retain – Allows for manual reclamation of the resource. The PV is not available for another claim because the previous claimant’s data remains on the volume; a K8s admin must manually reclaim the volume.
  • Delete – Removes the PV resource from the K8s cluster, along with the associated storage asset such as cloud storage, NFS, etc.
  • Recycle – Performs a basic scrub on the volume and makes it available again for a new PVC.

Persistent Volumes (PVs)

Here is a breakdown of what Persistent Volumes are:

  • A persistent volume (PV) is a storage resource created and managed by the Kubernetes API that can exist beyond the lifetime of an individual pod.
  • A persistent volume can be manually provisioned by a Kubernetes admin or dynamically provisioned by the Kubernetes API server using StorageClasses.
  • Dynamic provisioning uses a StorageClass to identify what type of storage (NFS, iSCSI, or cloud-based) needs to be created.

Example of a configuration file for the PersistentVolume:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0010
spec:
  capacity:
   storage: 40Gi
  volumeMode: Filesystem
  accessModes:
   - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
   - hard
   - nfsvers=4.1
  nfs:
   path: /tmp
   server: 172.19.0.22

Persistent Volume Claims (PVCs)

Here is a breakdown of what Persistent Volumes Claims are:

  • A PersistentVolumeClaim (PVC) is a request for storage by a user.
  • A PersistentVolumeClaim specifies the StorageClass, the volume mode (Block or Filesystem), the access mode, and the capacity needed.
  • The PVC access modes are:
    • ReadOnlyMany (ROX) – the volume can be mounted read-only by many nodes.
    • ReadWriteOnce (RWO) – the volume can be mounted read-write by a single node.
    • ReadWriteMany (RWX) – the volume can be mounted read-write by many nodes.

Example of a configuration file for the PersistentVolumeClaim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc0002
spec:
  storageClassName: manual
  accessModes:
   - ReadWriteOnce
  resources:
   requests:
    storage: 10Gi

Lifecycle of a Volume & Claim

Let’s take a look at how the lifecycle of volumes and claims flows: a PV is provisioned (statically by an admin or dynamically through a StorageClass), a PVC is bound to a matching PV, pods use the claim as a volume, and when the claim is released the PV is reclaimed according to its reclaim policy (Retain, Delete, or Recycle).

StatefulSets

Here is a breakdown of what Stateful Sets are:

  • StatefulSets are Kubernetes objects used when we need each pod to have its own independent state and use its own individual volume.
  • With StatefulSets, each pod is assigned a unique name, and that unique name stays with it even if the pod is deleted and recreated.
  • Headless services are primarily used when we deploy StatefulSet applications. Headless services don’t operate like load balancers and are not assigned cluster IPs the way a regular service is.

StatefulSets are typically used when the following is needed:

  • Unique network identifiers for pods
  • Persistent storage for retaining data
  • Ordered, graceful deployment and scaling of pods
  • Ordered, automated rolling updates of the app
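To tie these pieces together, here is a minimal, illustrative example of a headless Service plus a StatefulSet that gives each pod a stable name and its own PersistentVolumeClaim through volumeClaimTemplates. The names, image, and storage class are placeholders (the storage class happens to match the StorageClass example earlier in the post):

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  clusterIP: None              # headless service: stable DNS names, no load balancing
  selector:
    app: web
  ports:
    - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web             # must reference the headless service above
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx
          volumeMounts:
            - name: data
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:        # each pod gets its own PVC (data-web-0, data-web-1, ...)
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: managed-premium-retain
        resources:
          requests:
            storage: 10Gi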

Some Good Practices When Running Stateful Apps on Kubernetes

That wraps up this blog post! Thanks for reading and stay tuned to my blog for more content on Kubernetes soon.

Read more