Coming up soon I will be giving my first Pluralsight Author Talk. This live session will be on GitOps. I will cover a fundamental understanding of GitOps, the need for GitOps, GitOps architecture, the GitOps workflow, GitOps principles and practices, and tooling such as Flux, Argo CD, and Jenkins X.
Today Pluralsight published the first of two Azure Arc courses I am building. This marks my 10th course on Pluralsight! This first course is Azure Arc enabled Kubernetes: Getting Started. Azure Arc is a key service in the Azure story, extending Azure to on-premises data centers and to other clouds outside of Azure.
This course is just under two hours and packed full of information and demos to help you get started with the topic. I go into a deeper understanding of the multi-cloud market, Kubernetes in the enterprise, and Azure Arc enabled Kubernetes, then cover Azure Arc enabled Kubernetes architecture, setting up and using Azure Arc enabled Kubernetes, and using GitOps with Azure Arc enabled Kubernetes.
I am excited about releasing this course for several reasons: #1, Azure Arc is a newer technology from Microsoft and I am happy to share my knowledge about it; #2, this one is a combination of Azure Arc, Kubernetes, and GitOps, all technologies I have been working with regularly; #3, Azure, Kubernetes, and multi-cloud are all growing in the enterprise, and this course covers all three.
Here is the description of the course:
Managing Kubernetes clusters across on-premises and multiple clouds can be disjointed and overly complicated. In this course, Azure Arc Enabled Kubernetes: Getting Started, you’ll learn how to manage external Kubernetes clusters with Azure Arc. First, you’ll explore what Azure Arc K8s is and how to use it. Next, you’ll discover the features of Azure Arc K8s and how to use them. Finally, you’ll learn how to use Azure Arc K8s and GitOps to deploy applications. When you’re finished with this course, you’ll have the skills and knowledge of Azure Arc enabled Kubernetes needed to manage Kubernetes clusters across on-premises and multiple clouds.
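To give a flavor of the hands-on portion, here is a minimal sketch of onboarding an existing Kubernetes cluster to Azure Arc with the Azure CLI. It assumes you are already logged in with the Azure CLI, and the resource group and cluster names are placeholders.

# Add the Azure Arc enabled Kubernetes CLI extension
az extension add --name connectedk8s

# Create a resource group to hold the Arc connected cluster resource (names and region are placeholders)
az group create --name arc-demo-rg --location eastus

# Connect the cluster in your current kubeconfig context to Azure Arc
az connectedk8s connect --name my-arc-cluster --resource-group arc-demo-rg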
Here are the areas and topics of the course:
Understanding Azure Arc Enabled Kubernetes
Intro and Topics
Understanding Azure Arc Enabled Kubernetes
Understanding Azure Arc Enabled Kubernetes Use Cases
This course is also part of the all-new Azure Arc path titled “Managing Environments with Azure Arc” on Pluralsight. There are other courses in the path already, such as Azure Arc: The Big Picture, Azure Arc-enabled Data Services: The Big Picture, and Azure Arc and Azure Lighthouse: First Look, with many more Azure Arc courses on the way.
I hope you find value in this new Azure Arc enabled Kubernetes: Getting Started course. Be sure to follow my profile on Pluralsight so you will be notified as I release new courses including my second Azure Arc related course!
Today I went on “Tech Talk Wednesday,” a podcast and radio show with Kazeem Adegboyega. The topic was “30 Minutes of Azure Kubernetes Service (AKS)”. It streamed online via Microsoft Teams and aired in Lagos, Nigeria on Lagos State University (LASU) radio (95.7).
I had a great time talking with Kazeem! Even Sam Erskine made a guest appearance. If you missed the live show you can watch it on YouTube:
I am honored to be a guest next Wednesday, August 26th on the “Tech Talk Wednesday” podcast and radio show with Kazeem Adegboyega (@KazeemCanTeach)! We will be chatting about Azure Kubernetes Service (AKS).
This show will be streamed online via Microsoft Teams and will air in Lagos, Nigeria on Lagos State University (LASU) radio (95.7)!
One of my goals is to help spread knowledge about tech in Africa and showcase African technologists in the US. This is the first step in that journey.
Today I passed the Docker Certified Associate exam! In this post, I will share some details about the exam and the resources I used to study for it.
The certification is good for two years after passing the exam. It demonstrates that you have foundational real-world Docker skills. The exam is multiple choice with 55 questions, you have 1.5 hours to finish it, and it costs $195 USD. It is recommended that you have 6 to 12 months of hands-on experience with Docker before taking the exam. You can read about and sign up for the exam here: https://success.docker.com/certification.
I have had some folks ask me why I would waste my time taking the Docker exam. They say to focus on Kubernetes and OpenShift instead of Docker. Let's talk about why I chose to pursue the Docker certification. First off, you have to run containers on those orchestration platforms, and chances are you will run Docker containers on them. Therefore, before diving into an orchestration platform it is important to be knowledgeable about containers. Also, I have seen many scenarios in the cloud where it makes sense to run containers directly on the cloud platform itself, and again, chances are those will be Docker containers. Docker is still a leader in the container space. There are several reports and articles that point to this. Here are some of the reports and articles backing this up:
Docker is listed as the leader in the “Container Tools Used” section of the RightScale 2019 State of the Cloud Report here:
Docker is listed as #2 with 31.35% market share in Datanyze's 2020 “Containerization Market Share Competitor Analysis Report” here:
I will call out that the Docker exam covers Swarm mode, the orchestration platform that is included with Docker. Swarm mode is a lot easier to learn and use compared to Kubernetes; however, Kubernetes has won the orchestration platform war. It would be nice if Docker would revamp the exam, reducing or removing Swarm and replacing it with some Kubernetes objectives. This would make more sense because there is a strong chance Swarm will not be used in the real world.
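For context on what the exam's Swarm objectives look like in practice, here is a minimal sketch of Swarm mode; the service name and image are just illustrative.

# Initialize Swarm mode on the current Docker host, making it a manager node
docker swarm init

# Deploy a simple replicated service (service name and image are illustrative)
docker service create --name web --replicas 3 --publish 80:80 nginx

# Check that the service and its replicas are running
docker service ls
docker service ps web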
The Docker exam was not an easy exam, and you definitely want to have some hands-on experience with Docker before taking it. There are a ton of resources out there that you can leverage beyond hands-on work to assist in your study for this certification. There are many books available. You can do a quick search on Amazon and check the reviews for one that would be a good fit for you. I have read a couple of books on Docker and have co-authored a book on AKS with a chapter dedicated to Docker.
Here is the list of what I used to study.
Free hands-on Docker labs (This resource was huge for me. It gave me environments to use and scenarios for training with Docker and Docker Swarm mode.):
I attended a “Docker JumpStart Virtual Workshop” by Microsoft MVP Mike Pfeiffer and Microsoft MVP/Docker Captain Dan Wahlin (This workshop occurred in the past, but I believe you can sign up and watch the recordings from the workshop.):
Free Docker Certification review questions here (This blogger has a bunch of review questions to help you get in the right mindset. They cover all the exam areas.):
Docker courses and learning checks on Pluralsight (The courses are great. I found the learning checks very useful because they were a good way to check my knowledge in all of the exam areas.):
Spent time working with Docker on some projects (self-explanatory).
Overall, the Docker certification is a good move for your career whether you are an IT pro or a developer, work in DevOps, or work with the cloud. I definitely recommend getting this certification. If you decide to go after it, good luck!
Stay tuned for more blog posts with insights on certifications in the future.
This will be a short post, and this one is mostly for me so I can easily find this information in the future when I need it again. 🙂

Recently I was containerizing some PHP websites that use Composer. If you are not familiar with Composer but you are working with PHP, you will run across it at some point. Composer is a dependency package manager for PHP. Composer manages (installs/updates) any required libraries and dependencies needed for your PHP site.
To use Composer, you first declare the libraries and dependencies in a composer.json file in your site directory; then you run Composer and it does its magic. For more information on Composer visit: https://getcomposer.org/doc/00-intro.md
Back to my task: I needed to install Composer in the containers I was building and run it to install all the dependencies. I needed these actions in the Dockerfile so it would all happen during the container build. After some research on Composer I was able to pull something together. Here is the syntax that I ended up putting in the Dockerfile:
# Install Composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
# Set working directory for composer (Contains the composer.json file)
WORKDIR /var/www/html/sitename
# Run Composer
RUN composer install
Note: I placed the above code at the end of the Dockerfile, ensuring Apache, PHP, etc. were all in place first.
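For reference, here is roughly what the full Dockerfile looks like in context; this is a sketch, and the base image tag, site name, and paths are placeholders for whatever your PHP site actually uses.

# Base image with PHP and Apache (tag is a placeholder)
FROM php:7.4-apache

# Copy the site code, including composer.json, into the image (sitename is a placeholder)
COPY . /var/www/html/sitename

# Install Composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer

# Set working directory for composer (contains the composer.json file)
WORKDIR /var/www/html/sitename

# Run Composer to install the site's dependencies
RUN composer install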
These days the growth of Kubernetes is on fire! Azure Kubernetes Service (AKS), Microsoft’s managed Kubernetes offering, is one of the fastest-growing products in the Azure portfolio of cloud services, with no signs of slowing down. For some time, two fellow Microsoft MVPs, Janaka Rangama (@JanakaRangama) and Ned Bellavance (@Ned1313), and I have been working hard on an Azure Kubernetes Service (AKS) book. We are excited that the book has been finished and is currently in production. The publisher, Apress, plans to publish it on December 28th, 2019.
Besides my co-authors, we had additional rock stars helping with this project. For the tech review, we had the honor of working with Mike Pfeiffer (@mike_pfeiffer), Microsoft MVP, author, speaker, and host of the CloudSkills.fm podcast, and Keiko Harada (@keikomsft), Senior Program Manager, Azure Compute – Containers. A shout out to them and huge thanks for being a part of this!
We also had the honor of the foreword being written by Brendan Burns (@brendandburns), Distinguished Engineer at Microsoft and co-founder of Kubernetes. A shout out to him and a world of thanks for taking the time to help with this project!
In this book, we take a journey inside Docker containers, container registries, Kubernetes architecture, Kubernetes components, and core kubectl commands. We then dive into topics around Azure Container Registry, Rancher for Kubernetes management, a deep dive into AKS, package management with Helm, and using AKS in CI/CD with Azure DevOps. The goal of this book is to give readers just enough theory and lots of practical, straightforward knowledge to start running their own AKS cluster.
For anyone looking to work with Azure Kubernetes Service or already working with it, this book is for you! We hope you get a copy and it becomes a great tool you can use on your Kubernetes journey.
I want to share here about Docker training I will be attending later this month, June 24th/25th, 2019. It is a Docker JumpStart Virtual Workshop. I am excited about this training because it will be delivered by fellow Microsoft MVPs Dan Wahlin and Mike Pfeiffer. Dan Wahlin is also a Docker Captain.
For those that don’t know, a Docker Captain is like a Microsoft MVP but for Docker. There will even be some Kubernetes covered on day 2. This is shaping up to be some great training.
As of now there is still room in this class, and it’s less than $300 USD! If you have wanted to get up to speed on Docker, this is a good low-cost way to do it. Here is a link to sign up: Docker JumpStart Workshop
Here is what will be covered across the 2 days (from the training website):
Azure Kubernetes Service (AKS) vs. Azure App Service Environment (ASE) vs. Azure Service Fabric (ASF) Comparison
Scenario:
So, your team recently has been tasked with developing and running a new application. The team made the decision to take a microservices-based approach to the application. Your team also has decided to utilize Docker containers and Azure as the cloud platform. Great, now it’s time to move forward, right? Not so fast. There is no question that Docker containers will be used, but what is in question is where you will run the containers. In Azure, containers can run on Azure’s managed Kubernetes service (AKS), on an App Service Plan in an Azure App Service Environment (ASE), or on Azure Service Fabric (ASF). Let’s look at each one of these Azure services, including an overview, pros, cons, and pricing.
Pros and cons charts (clickable in the original post): Azure Kubernetes Service (AKS), Azure App Service Environment (ASE), and Azure Service Fabric (ASF).
Conclusion:
Choose Azure Kubernetes Service if you need more control, want to avoid vendor lock-in (it can run on Azure, AWS, GCP, or on-prem), need the features of a full orchestration system, want flexible autoscale configurations, need deeper monitoring, want flexibility with networking, public IPs, DNS, and SSL, need a rich ecosystem of add-ons, will have many multi-container deployments, and plan to run a large number of containers. AKS is also a low-cost option.
Choose Azure App Service Environment if you don’t need as much control, want a dedicated SLA, don’t need deep monitoring or control of the underlying server infrastructure, want to leverage features such as deployment slots and blue/green deployments, will have simple and few multi-container deployments via Docker Compose, and plan to run a smaller number of containers. Regarding cost, running a containerized application in an App Service Plan in ASE tends to be more expensive compared to running it in AKS or Service Fabric. The higher cost of running containers on ASE is because with an App Service Plan on ASE you are paying for a combination of resources and the managed service, while with AKS and ASF you are only paying for the resources used.
Choose Service Fabric if you want a full microservices platform, need the flexibility now or in the future to run in the cloud and/or on-premises, will run native code in addition to containers, want automatic load balancing, and want low cost.
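If AKS is the direction you land on, provisioning a basic cluster is straightforward. Here is a minimal sketch using the Azure CLI; the resource group, cluster name, region, and node count are placeholders.

# Create a resource group for the cluster (names and region are placeholders)
az group create --name aks-demo-rg --location eastus

# Create a basic AKS cluster with the monitoring addon enabled
az aks create --resource-group aks-demo-rg --name aks-demo --node-count 3 --enable-addons monitoring --generate-ssh-keys

# Pull credentials so kubectl can talk to the new cluster
az aks get-credentials --resource-group aks-demo-rg --name aks-demo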
A huge thanks to my colleague Sunny Singh (@sunnys101) for giving his input and reviewing this post. Thanks for reading, and check back for more Azure and container content soon.
Part of running Kubernetes is being able to monitor the cluster, the nodes, and the workloads running in it. Running production workloads, regardless of PaaS, VMs, or containers, requires a solid level of reliability. Azure Kubernetes Service comes with monitoring provided from Azure, bundled with the semi-managed service. Kubernetes also has built-in monitoring that can be utilized.
It is important to note that AKS is a free service, and Microsoft aims to achieve at least 99.5% availability for the Kubernetes API server on the master node side.

But because AKS is a free service, Microsoft does not carry an SLA on the Kubernetes cluster service itself. Microsoft does provide an SLA for the availability of the underlying nodes in the cluster via the Azure Virtual Machines SLA. Without an official SLA for the Kubernetes cluster service, it becomes even more critical to understand your deployment and have the right monitoring tooling and plan in place, so that when an issue arises the DevOps or CloudOps team can investigate and resolve it.
The monitoring service included with AKS gives you monitoring from two perspectives: directly from an AKS cluster, and across all AKS clusters in a subscription. The monitoring looks at two key areas, “Health status” and “Performance charts,” and consists of the following (a CLI sketch for enabling this monitoring follows the list):

Insights – Monitoring for the Kubernetes cluster and containers.

Metrics – Metric-based cluster and pod charts.

Log Analytics – Kubernetes and container log viewing and search.
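If monitoring was not enabled when the cluster was created, here is a sketch of turning it on for an existing AKS cluster with the Azure CLI; the cluster and resource group names are placeholders.

# Enable the Azure Monitor for containers addon on an existing AKS cluster (names are placeholders)
az aks enable-addons --addons monitoring --name aks-demo --resource-group aks-demo-rg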
Azure Monitor
Azure Monitor has a containers section. Here is where you will find a health summary across all clusters in a subscription, including ACS. You also will see how many nodes and system/user pods a cluster has and whether there are any health issues with a node or pod. If you click on a cluster from here, it will bring you to the Insights section on the AKS cluster itself.
If you click on an AKS cluster, you will be brought to the Insights section of AKS monitoring on the actual AKS cluster. From here you can access the Metrics section and the Logs section as well, as shown in the following screenshot.
Insights
Insights is where you will find the bulk of useful data when it comes to monitoring AKS. Within Insights you have these four areas: Cluster, Nodes, Controllers, and Containers. Let’s take a deeper look into each of the four areas.
Cluster
The Cluster page contains charts with key performance metrics for your AKS cluster’s health. It has performance charts for your node count with status and pod count with status, along with aggregated node memory and CPU utilization across the cluster. Here you can change the date range and add filters to scope down to the specific information you want to see.
Nodes
After clicking on the Nodes tab, you will see the nodes running in your AKS cluster along with uptime, the number of pods on the node, CPU usage, memory working set, and memory RSS. You can click on the arrow next to a node to expand it, displaying the pods that are running on it.

What you will notice is that when you click on a node or pod, a property pane is shown on the right-hand side with the properties of the selected object. An example of a node is shown in the following screenshot.
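Kubernetes’ built-in tooling gives a similar node-level view from the command line. Here is a small sketch; it assumes the metrics server is available in the cluster (AKS includes it), and the node name is a placeholder.

# List the nodes in the cluster and their status
kubectl get nodes -o wide

# Show CPU and memory usage per node (requires the metrics server)
kubectl top nodes

# List the pods running on a specific node (node name is a placeholder)
kubectl get pods --all-namespaces --field-selector spec.nodeName=aks-nodepool1-12345678-0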
Controllers
Click on the Controllers tab to see the health of the cluster’s controllers. Again, here you will see CPU usage, memory working set, and memory RSS of each controller and what is running on each controller. As an example, in the following screenshot you can see the Kubernetes dashboard pod running on the kubernetes-dashboard controller.
The properties of the Kubernetes dashboard pod, as shown in the following screenshot, give you information like the pod name, pod status, UID, labels, and more.

You can drill in to see the container the pod was deployed with.
Containers
The Containers tab is where all the containers in the AKS cluster are displayed. As with the other tabs, you can see CPU usage, memory working set, and memory RSS. You also will see status, the pod it is part of, the node it’s running on, its uptime, and whether it has had any restarts. In the following screenshot the CPU usage metric filter is used, and I am showing a container that has restarted 71 times, indicating an issue with that container.
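The same restart signal can also be spotted from the command line with kubectl; a quick sketch:

# The RESTARTS column highlights containers that keep restarting
kubectl get pods --all-namespaces

# Show per-container CPU and memory usage across namespaces (requires the metrics server)
kubectl top pods --all-namespaces --containers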
In the following screenshot the memory working set metric filter is shown.

You can also filter which containers are shown by using the search-by-name filter.
You can also see a container’s logs in the Containers tab. To do this, select a container to show its properties. Within the properties, you can click on View container live logs (preview), as shown in the following screenshot, or View container logs. Container log data is collected every three minutes. STDOUT and STDERR are the log output streams from each Docker container that are sent to Log Analytics.
Kube-system logs are not currently collected and sent to Log Analytics. If you are not familiar with Docker logs, more information on STDOUT and STDERR can be found in this Docker logging article: https://docs.docker.com/config/containers/logging.
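Outside the portal, the same STDOUT/STDERR streams can be pulled directly with kubectl; here is a minimal sketch where the pod, container, and namespace names are placeholders.

# Stream the logs of a specific container in a pod (names are placeholders)
kubectl logs my-pod --container my-container --namespace my-namespace --follow

# View logs from the previous instance of the container (useful after a restart)
kubectl logs my-pod --container my-container --namespace my-namespace --previous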