I am honored to be a guest next Wednesday, August 26th on the “Tech Talk Wednesday” podcast and radio show with Kazeem Adegboyega (@KazeemCanTeach)! We will be chatting about Azure Kubernetes Service (AKS).
This show will be streamed online via Microsoft Teams and will air in Lagos, Nigeria on Lagos State University (LASU) radio (95.7)!
One of my goals is to help spread knowledge about tech in Africa and showcase African technologists in the US. This is the first step in that journey.
Microsoft has been making some amazing enhancements to AKS and to its open-source tooling in general, making Kubernetes easier to use and easier for folks who are just getting started with it.
Recently Microsoft added new functionality called “Kubernetes resource view“.
Kubernetes resource view in Azure Portal
This allows you to see and work with some Kubernetes resources directly in the Azure portal. As you can see in the previous screenshot, it includes Namespaces, Workloads, and Services. When you deploy a new AKS cluster this is enabled by default.
If you deployed an AKS cluster before this functionality was released, you will need to enable the Kubernetes resource view. You can choose which namespace to enable it on. It will look like this:
Enable Kubernetes resource view button
The three main areas of resources are:
Namespaces
Workloads
  Deployments
  Pods
  Replica Sets
  Daemon Sets
Services and ingresses
In each of these resource areas, you can view resources, add and delete them, and show labels.
You can click on a resource to see its properties under the Overview tab. The Overview tab has valuable information; for a pod, for example, you can see the pod status, the containers that belong to it, its conditions, and more. Here are some screenshots:
You can see any events related to the resource, and you can even view or edit the resource’s YAML. Here is what it looks like when editing a resource:
Well, this was a quick blog post to give an early look at the new Kubernetes resource view in AKS. I recommend you check it out! Remember this is a preview and it’s going to get better and better.
I can imagine in the future we will be able to access more Kubernetes resources and API objects in the Azure portal. For example, it would be cool to be able to work with Secrets and ConfigMaps right in the Azure portal! I don’t know about you, but I am very excited about what Microsoft has been doing with AKS!
I recently had the honor of being a guest on the “Lisa at the Edge” Podcast. Lisa is a Microsoft Hybrid Cloud Strategist and an influencer in the hybrid cloud community based out of Scotland. She runs a blog and this year she started a popular podcast.
On Lisa’s podcast, she covers Careers in Tech and Microsoft Hybrid Cloud and a range of other topics with experts across the tech community.
This is an episode you don’t want to miss. This was one of the most entertaining podcasts I have been on. It took some interesting turns in terms of topics and was very engaging. In the podcast episode Lisa and I talk about:
Evolving your career as technology evolves
Transformation of the IT department into a strategic business partner
When working with containers, a common need is to store container images somewhere. Container registries are the go-to for this. Docker Hub is the most well-known example of a container registry.
What is a Container Registry?
A Container Registry is a group of repositories used to store container images. A container repository is used to manage, pull, or push container images. A Container Registry does more than a repository in that it has API paths, tasks, scanning for vulnerabilities, digital signing of images, access control rules, and more.
Container registries can be public or private. For example, Docker Hub is a public registry and anyone can access its container repositories to pull images. A private registry is one that you would host either on-premises or on a cloud provider. All of the major cloud providers, including Azure, have a Container Registry offering.
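To make the difference concrete, here is what pulling from the public Docker Hub versus a private Azure Container Registry could look like (the registry and image names below are placeholders, not from this post):
# Pull a public image from Docker Hub - no registry hostname needed
docker pull nginx:latest
# Pull from a private Azure Container Registry - log in first, then
# reference the image by the registry's fully qualified hostname
az acr login --name myregistry
docker pull myregistry.azurecr.io/myapp:v1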
Integrate ACR with AKS
With AKS it is a good idea to use a private container registry to host your container images. The process is: use Docker to build your image > push the image to your Azure Container Registry > pull the image from the registry when deploying a Pod to your AKS cluster.
There are 3 ways to integrate AKS with Azure Container Registry. I typically only use one way and will focus on that in this blog post.
Here are 2 of the ways you can integrate AKS with Azure Container Registry. The first is through an Azure AD service principal (SPN) that is assigned the AcrPull role. More on this here. You would use this first way in scenarios where you only have one ACR and it will be the default place to pull images from.
The second is to create a Kubernetes ServiceAccount that is used to pull images when deploying pods. With this you would add a “kind: ServiceAccount” object to your Kubernetes cluster that carries the ACR credentials. Then in your pod YAML files you would need to specify the service account, for example “serviceAccountName: ExampleServiceAccountName”.
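A rough sketch of that approach could look like the following, assuming a docker-registry secret named acr-secret already exists in the cluster (creating one is shown under the third option below); the ServiceAccount, pod, and image names are placeholders:
# ServiceAccount that carries the ACR pull secret
kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: acr-service-account
imagePullSecrets:
- name: acr-secret
EOF
# Reference the ServiceAccount in the pod spec so image pulls use it
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  serviceAccountName: acr-service-account
  containers:
  - name: example
    image: myregistry.azurecr.io/myapp:v1
EOF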
The way I like to integrate AKS with Azure Container Registry is to use a Kubernetes secret of type docker-registry. With this option, you create a secret in the Kubernetes cluster for your Azure Container Registry and then specify that secret in your pod YAML files. This allows you to have multiple container registries to pull from. This option is also quick and easy to set up.
To get started you need to build your Docker image and push it up to your Azure Container Registry. In this blog post, I will not cover deploying ACR or building the Docker image, assuming you have already done these things. Now let’s set up the ACR and AKS integration using a docker-registry Kubernetes secret.
1. For the first step, you will need the credentials for your Azure Container Registry. To get these, navigate to:
2. The second step is to push your Docker image up to your ACR.
# Log into the Azure Container Registry
docker login ACRNAMEHERE.azurecr.io -u ACRUSERNAMEHERE -p PASSWORDHERE
# Tag the docker image with ACR
docker tag DOCKERIMAGENAMEHERE ACRNAMEHERE.azurecr.io/DOCKERIMAGENAMEHERE:v1
# Push the image to ACR
docker push ACRNAMEHERE.azurecr.io/DOCKERIMAGENAMEHERE:v1
3. The third step is to create the docker-registry Kubernetes secret by running the following syntax from Azure Cloud Shell:
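A docker-registry secret for an ACR generally looks something like this (the secret name is a placeholder, and the registry values match the placeholders used in step 2); you then reference that secret in your pod YAML:
# Create the docker-registry secret with your ACR credentials
kubectl create secret docker-registry acr-secret \
  --docker-server=ACRNAMEHERE.azurecr.io \
  --docker-username=ACRUSERNAMEHERE \
  --docker-password=PASSWORDHERE
# Reference the secret in your pod YAML so the image can be pulled from ACR
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: example
    image: ACRNAMEHERE.azurecr.io/DOCKERIMAGENAMEHERE:v1
  imagePullSecrets:
  - name: acr-secret
EOF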
In Kubernetes, you have a container or containers running as a pod. In front of the pods, you have something known as a service. Services are simply an abstraction that defines a logical set of pods and how to access them. As pods move around, the service they are bound to keeps track of which nodes the pods are running on. For external access to services, there is typically an ingress controller that allows access from outside of the Kubernetes cluster to a service. An ingress defines the rules for inbound connections.
Microsoft has had an Application Gateway Ingress Controller for Azure Kubernetes Service (AKS) in public preview for some time and recently released it to GA. The Application Gateway Ingress Controller (AGIC) monitors the Kubernetes cluster for ingress resources and makes changes to the specified Application Gateway to allow inbound connections.
This allows you to leverage the Application Gateway service in Azure as the entry point into your AKS cluster. In addition to utilizing the Application Gateway’s standard set of functionality, the AGIC uses the Application Gateway Web Application Firewall (WAF). In fact, that is the only version of the Application Gateway that is supported by the AGIC. The great thing about this is that you can put the Application Gateway’s WAF protection in front of your applications that are running on AKS.
This blog post is not a detailed deep dive into AGIC. To learn more about AGIC visit this link: https://azure.github.io/application-gateway-kubernetes-ingress. In this blog post, I want to share a script I built that deploys the AGIC. There are many steps to deploying the AGIC, and I figured this is something folks will need to deploy over and over, so it makes sense to make it a little easier to do. You won’t have to worry about creating a managed identity, getting various IDs, downloading and updating YAML files, or installing helm charts. Also, this script will be useful if you are not familiar with sed and helm commands. It combines PowerShell, AZ CLI, sed, and helm code. I have already used this script about 10 times myself to deploy the AGIC and boy has it saved me time. I thought it would be useful to someone out there and wanted to share it.
I typically deploy RBAC-enabled AKS clusters, so this script is set up to work with an RBAC-enabled AKS cluster. If you are deploying AGIC for a non-RBAC AKS cluster, be sure to view the notes in the script and adjust a couple of lines of code to make it non-RBAC ready. Also note this AGIC script is focused on brownfield deployments, so before running the script there are some components you should already have deployed (a rough az CLI sketch for creating them follows the list below). These components are:
VNet and 2 Subnets (one for your AKS cluster and one for the App Gateway)
AKS Cluster
Public IP
Application Gateway
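If you still need to create the non-AKS prerequisites, something along these lines could work as a starting point. All names, address ranges, and the region below are placeholders, and flags can vary between Azure CLI versions, so treat this as a sketch rather than the exact commands used for this post:
# Resource group, VNet, and two subnets (the AKS subnet is referenced via --vnet-subnet-id on az aks create)
az group create --name rg-aks-agic --location eastus
az network vnet create --resource-group rg-aks-agic --name vnet-aks \
  --address-prefixes 10.0.0.0/16 \
  --subnet-name snet-aks --subnet-prefixes 10.0.0.0/22
az network vnet subnet create --resource-group rg-aks-agic --vnet-name vnet-aks \
  --name snet-appgw --address-prefixes 10.0.4.0/24
# Public IP (Standard SKU is required for the v2 Application Gateway SKUs)
az network public-ip create --resource-group rg-aks-agic --name pip-appgw \
  --sku Standard --allocation-method Static
# WAF_v2 Application Gateway in its own subnet
az network application-gateway create --resource-group rg-aks-agic --name appgw-aks \
  --sku WAF_v2 --capacity 2 \
  --vnet-name vnet-aks --subnet snet-appgw \
  --public-ip-address pip-appgw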
The script will deploy and do the following (a sketch of the equivalent commands follows this list):
Deploys the AAD Pod Identity.
Creates the Managed Identity used by the AAD Pod Identity.
Gives the Managed Identity Contributor access to Application Gateway.
Gives the Managed Identity Reader access to the resource group that hosts the Application Gateway.
Downloads and renames the sample-helm-config.yaml file to helm-agic-config.yaml.
Updates the helm-agic-config.yaml with environment variables and sets RBAC enabled to true using Sed.
Adds the Application Gateway ingress helm chart repo and updates the repo on your AKS cluster.
Installs the AGIC pod using a helm chart and environment variables in the helm-agic-config.yaml file.
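If you want a feel for what the script is doing under the hood, those steps boil down to commands roughly like the ones below. The resource group, identity, and release names are placeholders, and the AAD Pod Identity URL, helm repo URL, and chart name follow the public AGIC documentation, so verify them against current guidance before relying on them:
# Deploy AAD Pod Identity (RBAC-enabled cluster variant)
kubectl apply -f https://raw.githubusercontent.com/Azure/aad-pod-identity/master/deploy/infra/deployment-rbac.yaml
# Create the managed identity AGIC will use
az identity create --resource-group rg-aks-agic --name agic-identity
# Grant the identity Contributor on the App Gateway and Reader on its resource group
az role assignment create --assignee <identity-client-id> --role Contributor --scope <appgw-resource-id>
az role assignment create --assignee <identity-client-id> --role Reader --scope <resource-group-id>
# Add the AGIC helm repo and install the chart using the edited helm-agic-config.yaml
helm repo add application-gateway-kubernetes-ingress https://appgwingress.blob.core.windows.net/ingress-azure-helm-package/
helm repo update
helm install agic application-gateway-kubernetes-ingress/ingress-azure -f helm-agic-config.yaml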
Now let’s take a look at running the script. It is recommended to upload this script to Azure Cloud Shell (PowerShell) and run it from there. Run:
./AGICDeployment.ps1 -verbose
You will be prompted for the following as shown in the screenshot:
Enter the name of the Azure Subscription you want to use.:
Enter the name of the Resource Group that contains the AKS Cluster.:
Enter the name of the AKS Cluster you want to use.:
Enter the name of the new Managed Identity.:
Here is a screenshot of what you will see while the script runs.
That’s it. You don’t have to do anything else except enter values at the beginning of the script run. To verify your new AGIC pod is running you can check a couple of things. First, run:
kubectl get pods
Note the name of my AGIC pod is appgw-ingress-azure-6cc9846c47-f7tqn. Your pod name will be different.
Now you can check the logs of the AGIC pod by running:
kubectl logs appgw-ingress-azure-6cc9846c47-f7tqn
You should not have any errors, but if you do they will show in the log. If everything ran fine, the output log should look similar to:
After it’s all said and done you will have a running Application Gateway Ingress Controller that is connected to the Application Gateway and ready for new ingresses.
This script does not deploy any ingress into your AKS cluster. That will need to be done in addition to this script, as needed. The following is an example of YAML for an ingress. You can use this to create an ingress for a pod running in your AKS cluster.
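Something along these lines should work as a starting point; the ingress name, backend service name, and port are placeholders for your own workload, and the annotation is what tells AGIC to pick the ingress up:
# Minimal ingress sketch that AGIC will route through the Application Gateway
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service
            port:
              number: 80
EOF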
These days the growth of Kubernetes is on fire! Azure Kubernetes Service (AKS), Microsoft’s managed Kubernetes offering, is one of the fastest-growing products in the Azure portfolio of cloud services with no signs of slowing down. For some time, two fellow Microsoft MVPs, Janaka Rangama (@JanakaRangama) and Ned Bellavance (@Ned1313), and I have been working hard on an Azure Kubernetes Service (AKS) book. We are excited that the book has been finished and is currently in production. The publisher, Apress, plans to publish it on December 28th, 2019.
Besides my co-authors, we had additional rock stars helping with this project. For the tech review, we had the honor of working with Mike Pfeiffer (@mike_pfeiffer), Microsoft MVP, author, speaker, and host of the CloudSkills.fm podcast, and Keiko Harada (@keikomsft), Senior Program Manager – Azure Compute – Containers. Shout out to them and huge thanks for being a part of this!
We also had the honor of the foreword being written by Brendan Burns (@brendandburns) Distinguished Engineer at Microsoft and co-founder of Kubernetes. A shout out to him and a world of thanks for taking the time to help with this project!
In this book, we take a journey inside Docker containers, container registries, Kubernetes architecture, Kubernetes components, and core kubectl commands. We then dive into topics around Azure Container Registry, Rancher for Kubernetes management, a deep dive into AKS, package management with Helm, and using AKS in CI/CD with Azure DevOps. The goal of this book is to give the reader just enough theory and lots of practical, straightforward knowledge needed to start running their own AKS cluster.
For anyone looking to work with Azure Kubernetes Service or already working with it, this book is for you! We hope you get a copy and it becomes a great tool you can use on your Kubernetes journey.
Azure Kubernetes Service (AKS) vs Azure App Service Environment (ASE) vs Azure Service Fabric (ASF) Comparison
Scenario:
So, your team has recently been tasked with developing a new application and running it. The team made the decision to take a microservices-based approach to the application. Your team has also decided to utilize Docker containers and Azure as the cloud platform. Great, now it’s time to move forward, right? Not so fast. There is no question that Docker containers will be used, but what is in question is where you will run the containers. In Azure, containers can run on Azure’s managed Kubernetes service (AKS), on an App Service Plan on Azure App Service Environment (ASE), or on Azure Service Fabric (ASF). Let’s look at each one of these Azure services including an overview, pros, cons, and pricing.
Pros and cons charts: Azure Kubernetes Service (AKS), Azure App Service Environment (ASE), and Azure Service Fabric (ASF).
Conclusion:
Choose Azure Kubernetes Service if you need more control, want to avoid vendor lock-in (it can run on Azure, AWS, GCP, or on-prem), need the features of a full orchestration system, need flexibility of autoscale configurations, need deeper monitoring, need flexibility with networking, public IPs, DNS, and SSL, need a rich ecosystem of add-ons, will have many multi-container deployments, and plan to run a large number of containers. It is also a low-cost option.
Choose Azure App Service Environment if you don’t need as much control, want a dedicated SLA, don’t need deep monitoring or control of the underlying server infrastructure, want to leverage features such as deployment slots and blue/green deployments, will have a small number of simple multi-container deployments via Docker Compose, and plan to run a smaller number of containers. Regarding cost, running a containerized application in an App Service Plan on ASE tends to be more expensive compared to running it in AKS or Service Fabric. The higher cost of running containers on ASE is because, with an App Service Plan on ASE, you are paying for a combination of resources and the managed service. With AKS and ASF you are only paying for the resources used.
Choose Service Fabric if you want a full microservices platform, need the flexibility now or in the future to run in the cloud and/or on-premises, will run native code in addition to containers, want automatic load balancing, and want low cost.
A huge thanks to my colleague Sunny Singh (@sunnys101) for giving his input and reviewing this post. Thanks for reading, and check back for more Azure and container content soon.
Part of running Kubernetes is being able to monitor the cluster, the nodes, and the workloads running in it. Running production workloads, regardless of whether they are on PaaS, VMs, or containers, requires a solid level of reliability. Azure Kubernetes Service comes with monitoring provided by Azure, bundled with the semi-managed service. Kubernetes also has built-in monitoring that can be utilized.
It is important to note that AKS is a free service and Microsoft aims to achieve at least 99.5% availability for the Kubernetes API server on the master node side.
But due to AKS being a free service, Microsoft does not carry an SLA on the Kubernetes cluster service itself. Microsoft does provide an SLA for the availability of the underlying nodes in the cluster via the Azure Virtual Machines SLA. Without an official SLA for the Kubernetes cluster service, it becomes even more critical to understand your deployment and have the right monitoring tooling and plan in place so that when an issue arises the DevOps or CloudOps team can address, investigate, and resolve any issues with the cluster.
The monitoring service included with AKS gives you monitoring from two perspectives: directly from an AKS cluster, and across all AKS clusters in a subscription. The monitoring looks at two key areas, “Health status” and “Performance charts”, and consists of:
Insights – Monitoring for the Kubernetes cluster and containers.
Metrics – Metric-based cluster and pod charts.
Log Analytics – K8s and container log viewing and search.
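Monitoring is typically turned on when the cluster is created, but if your cluster does not have it, the add-on can be enabled afterwards. A minimal az CLI sketch (the cluster and resource group names are placeholders):
# Enable Azure Monitor for containers (the "monitoring" add-on) on an existing AKS cluster
az aks enable-addons --resource-group MyResourceGroup --name MyAKSCluster --addons monitoring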
Azure Monitor
Azure Monitor has a containers section. Here is where you will find a health summary across all clusters in a subscription, including ACS. You also will see how many nodes and system/user pods a cluster has and if there are any health issues with a node or pod. If you click on a cluster from here, it will bring you to the Insights section on the AKS cluster itself.
If you click on an AKS cluster you will be brought to the Insights section of AKS monitoring on the actual AKS cluster. From here you can access the Metrics section and the Logs section as well, as shown in the following screenshot.
Insights
Insights is where you will find the bulk of useful data when it comes to monitoring AKS. Within Insights you have these 4 areas: Cluster, Nodes, Controllers, and Containers. Let’s take a deeper look into each of the 4 areas.
Cluster
The Cluster page contains charts with key performance metrics for your AKS cluster’s health. It has performance charts for your node count with status and pod count with status, along with aggregated node memory and CPU utilization across the cluster. In here you can change the date range and add filters to scope down to the specific information you want to see.
Nodes
After clicking on the Nodes tab you will see the nodes running in your AKS cluster along with uptime, the number of pods on the node, CPU usage, memory working set, and memory RSS. You can click on the arrow next to a node to expand it, displaying the pods that are running on it.
What you will notice is that when you click on a node or pod, a property pane will be shown on the right-hand side with the properties of the selected object. An example for a node is shown in the following screenshot.
Controllers
Click on the Controllers tab to see the health of the cluster’s controllers. Again, here you will see CPU usage, memory working set, and memory RSS of each controller, as well as what is running on a controller. As an example, shown in the following screenshot, you can see the kubernetes-dashboard pod running on the kubernetes-dashboard controller.
The properties of the kubernetes-dashboard pod, as shown in the following screenshot, give you information like the pod name, pod status, Uid, labels, and more.
You can drill in to see the container the pod was deployed using.
Containers
The Containers tab is where all the containers in the AKS cluster are displayed. And as with the other tabs, you can see CPU usage, memory working set, and memory RSS. You also will see the status, the pod it is part of, the node it’s running on, its uptime, and whether it has had any restarts. In the following screenshot the CPU usage metric filter is used and I am showing a container that has restarted 71 times, indicating an issue with that container.
In the following screenshot the memory working set metric filter is shown.
You can also filter which containers are shown by using the search by name filter.
You also can see a container’s logs in the Containers tab. To do this, select a container to show its properties. Within the properties, you can click on View container live logs (preview), as shown in the following screenshot, or View container logs. Container log data is collected every three minutes. STDOUT and STDERR are the log output from each Docker container that is sent to Log Analytics.
Kube-system logs are not currently collected and sent to Log Analytics. If you are not familiar with Docker logs, more information on STDOUT and STDERR can be found in this Docker logging article: https://docs.docker.com/config/containers/logging.
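If you prefer querying that log data directly rather than browsing it in the portal, something like the following could work from the CLI. This is a sketch: the workspace ID is a placeholder and the az monitor log-analytics command may require installing the log-analytics CLI extension first.
# Query the ContainerLog table that Azure Monitor for containers populates
az monitor log-analytics query \
  --workspace <log-analytics-workspace-id> \
  --analytics-query "ContainerLog | where TimeGenerated > ago(1h) | project TimeGenerated, LogEntrySource, LogEntry | take 50"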
In this blog post I am going to walk through the steps for deploying WordPress to Azure Kubernetes Service (AKS) using MySQL and WordPress Docker images. Note that the approach I will show you is just one way to do it. Another way to deploy WordPress to AKS would be using a Helm chart. Here is a link to the WordPress Helm chart by Bitnami: https://bitnami.com/stack/wordpress/helm. Here are the images we will use in this blog post:
The first thing we need to do is save these files as mysql-deployment.yaml and wordpress-deployment.yaml respectively.
Next, we need to set up a password for our MySQL database. We will do this by creating a secret on our K8s cluster. To do this, launch Bash or PowerShell in Azure Cloud Shell like in the following screenshot and run the following syntax:
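The exact commands depend on how your yaml files reference the secret, but assuming the secret is named mysql-pass (a common convention in WordPress-on-Kubernetes examples) and the files were saved to your Cloud Shell home directory, it looks something like this:
# Create the secret holding the MySQL password (replace YOURPASSWORDHERE)
kubectl create secret generic mysql-pass --from-literal=password=YOURPASSWORDHERE
# Deploy the MySQL pod and service from the saved manifest
kubectl apply -f /home/steve/mysql-deployment.yaml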
NOTE: You could use kubectl create -f /home/steve/mysql-deployment.yaml instead of apply to create the MySQL pod and service. I use apply because I typically use the declarative object configuration approach. kubectl apply essentially equals kubectl create + kubectl replace. In order to update an object after it has been created using kubectl create, you would need to run kubectl replace.
There are pros and cons to using each and it is more of a preference for example when using the declarative approach there is no audit trail associated with changes. For more information on the multiple Kubernetes Object Management approaches go here: https://kubernetes.io/docs/concepts/overview/object-management-kubectl/overview.
Note that the mysql yaml file has syntax to create a persistent volume. This is needed so that the database stays intact even if the pod fails, is moved, etc. You can check to ensure the persistent volume was created by running the following syntax:
kubectl get pvc
Also, you can run the following syntax to verify the mysql pod is running:
kubectl get pods
Deploying the WordPress Pod and service is the same process. Use the following syntax to create the WordPress pod and service:
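Assuming the same file location as before, that would be:
# Deploy the WordPress pod and service from the saved manifest
kubectl apply -f /home/steve/wordpress-deployment.yaml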
Again, check to ensure the persistent volume was created. Use the following syntax:
kubectl get pvc
NOTE: When checking right after you create the persistent volume, it may be in a Pending status for a while, as shown in the following screenshot:
You can also check the persistent volume using the K8s dashboard as shown in the following screenshot:
With the deployment of MySQL and WordPress we created 2 services. The MySQL service has a ClusterIP that can only be accessed internally. The WordPress service has an external IP that is also attached to an Azure Load Balancer for external access. I am not going to expand on what Kubernetes services are in this blog post, but know that they are typically used as an abstraction layer in K8s for access to Pods on the backend, and they follow the Pods regardless of the node they are running on. For more information about Kubernetes services visit this link: https://kubernetes.io/docs/concepts/services-networking/service.
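For reference, the WordPress service gets an external IP because it is of type LoadBalancer. A stripped-down sketch of such a service (the name, selector, and ports are placeholders, not the exact manifest used here) looks like:
# Minimal LoadBalancer service sketch - Azure provisions a load balancer and public IP for it
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: example-wordpress
spec:
  type: LoadBalancer
  selector:
    app: wordpress
  ports:
  - port: 80
    targetPort: 80
EOF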
In order to see that the services are running properly and find out the external IP you can run the following syntax:
kubectl get services (to see all services)
or
kubectl get services wordpress (to see just the WordPress service)
You also can view the services in the K8s dashboard as shown in the following screenshot:
Well, now that we have verified the pods and the services are running, let’s check out our new WordPress instance by going to the external IP in a web browser.
Thanks for checking out this blog post. I hope this was an easy to use guide to get WordPress up and running on your Azure Kubernetes Service cluster. Check back soon for more Azure and Kubernetes/Container content.
In this blog post I am going to walk through the setup of an AKS cluster step by step. This is to serve as an intro to AKS and to show how easy it is to get started with Kubernetes in Azure. In a follow-up blog post I will dive into AKS more, showing how …