My 30th Course: Google Firebase Studio Foundations (Vibe Coding)

I've reached a milestone: my 30th course was recently published on Pluralsight. The course is titled Google Firebase Studio Foundations. I suggested this topic to the teams at Pluralsight because vibe coding is seeing so much growth and Firebase Studio is built for exactly that. It is my 6th AI-related course. Firebase Studio is Google’s full stack AI-powered development environment that streamlines the process of prototyping and building apps from idea to deployment.

In this course, Google Firebase Studio Foundations, you’ll start by learning the basics of vibe coding with Firebase Studio. First, you’ll explore how the Gemini AI Agent fits into the development workflow. Next, you’ll discover how to speed up backend, frontend, and mobile app development with AI assistance. Finally, you’ll take an app idea from concept to a working deployment on Firebase App Hosting. By the end of this course, you’ll have the skills needed to confidently use Firebase Studio to build and run modern apps.

I brought this topic forward because I was excited about the opportunity to author a course that showcases what Firebase Studio can do in the vibe coding space. I also wanted to raise awareness about the platform since it can be used for free, and developers can expand to a generous number of workspaces at no cost through a Google Developers account. I packed this course with demos as we work through vibe coding an app.

This course is ideal for beginners and aspiring developers who want to prototype, build and deploy apps with Google Firebase Studio. Ideal learners include students, early-stage founders, and tech professionals curious about AI-assisted development.

These are the topics in the course:

Get Started with Firebase Studio

  • Intro and Overview
  • Introduction to Vibe Coding
  • Introduction to Firebase Studio
  • Demo: Exploring Firebase Studio

Development with Firebase Studio

  • Intro and Overview
  • Accelerating Development with Vibe Coding
  • Demo: Generating a Full App with the Firebase Prototyper

From Idea to Running App with Firebase Studio

  • Vibe Code to Deployment
  • What Is Firebase App Hosting?
  • Deploying the App
  • Demo Part 1: Deploy App to Firebase App Hosting
  • Demo Part 2: Deploy App to Firebase App Hosting

If you need to build a web or mobile app, whether you know how to code or not, you will want to check out my new course here: https://www.pluralsight.com/courses/google-firebase-studio-foundations.

I hope this course serves as a valuable resource in your vibe coding, AI, and app building journey. Thank you for your continued support, and be sure to follow my profile on Pluralsight so you will be notified as I release new courses.

Here is the link to my Pluralsight profile to follow me:

https://www.pluralsight.com/authors/steve-buchanan


Update

I posted about this milestone on LinkedIn, and something really cool happened: the former CEO and founder of Pluralsight, Aaron Skonnard, commented on the post to congratulate me. That means a lot coming from the founder of Pluralsight.

The link to the post is here if you want to check it out: https://www.linkedin.com/feed/update/urn:li:activity:7436863573412335617.

Read more

Bridging the Clouds: Back on RunAs Radio

It’s hard to believe, but it’s been a couple of years since I last sat down with Richard Campbell on RunAs Radio. Technology moves fast, but the cloud landscape has matured in ways that were only just beginning during my last visit.

I recently joined Richard for my third appearance on the show (Episode #1025) to talk about a challenge that is becoming the “new normal” for major SaaS providers: expanding a cloud-native stack across multiple clouds.

From Single-Cloud Roots to Multi-Cloud Reality

At Jamf, we’ve built a powerful reputation for managing Apple devices at scale. Historically, our SaaS product was rooted in AWS. However, as our customer base grows, now serving more than 70,000 customers worldwide, the demand for flexibility grows with it.

In this episode, we discuss the journey of bringing those SaaS workloads to Azure and AKS. It isn’t just about “moving” code; it’s about architecting for consistency without losing the unique benefits of each cloud provider.

Kubernetes: The Common Ground (But Not the Whole Story)

One of the key takeaways from our chat is that while Kubernetes (AKS, EKS, GKE) provides the common operating system for the modern cloud, it isn’t a “magic wand” for multi-cloud.

To achieve true consistency, you have to look past the orchestrator and focus on the surrounding ecosystem. We dove into the complexities of:

  • IaC & Deployment: Why tools like OpenTofu are becoming essential for maintaining cloud-agnostic deployments.
  • Observability: Using Prometheus and Grafana to ensure that your SRE teams see the same data regardless of whether the backend is Azure or AWS.
  • Identity: Navigating the friction between different identity providers to ensure a seamless experience for the end user and how platforms like Okta support this.

The Docker & AI Connection

We couldn’t have a conversation in 2026 without touching on the elephant in the room: AI. As a Microsoft MVP focused on AKS and a Docker Captain, I’ve been watching closely how the Kubernetes and container ecosystem is evolving to support AI/ML workloads. Richard and I spent some time discussing how Docker, Inc. is positioning itself in this space and how developers can leverage these tools to build AI-ready applications without getting locked into a single vendor’s proprietary stack.

Reflections on a Maturing Landscape

Coming back to RunAs Radio for a third time allowed me to reflect on just how much our industry has shifted. We’ve moved past the “is the cloud safe?” phase and into the “how do we optimize for a multi-cloud world?” phase.

Whether you are a platform engineer, a developer, or a technical leader, the lessons I’ve learned at Accenture, Microsoft, helping startups, and now at Jamf while scaling across multiple clouds are applicable to almost any modern enterprise.

You can listen to the full episode here: RunAs Radio #1025: SaaS on Multiple Clouds with Steve Buchanan

I’d love to hear your thoughts. Is your organization looking at multi-cloud for SaaS, or are you doubling down on a single provider?

Read more

Speaking at Open Source North 2025 on Multi-Cloud

I am excited to share that I will be speaking at this year’s Open Source North conference on May 29, 2025, at the University of St. Thomas in St. Paul.

This year, I’m teaming up with my fellow Jamf colleague, Levi McCormick (Director of Engineering at Jamf), for a session that is very close to our daily reality: Multi-Cloud Without the Marketing, or Designing for Multi-Cloud Without Losing Your Mind.

Why this talk? In the cloud industry, “multi-cloud”, “cloud native”, and “IaC via Terraform” are often sold as magic pills for redundancy, cost savings, unification, and more across clouds. But for the people actually building and maintaining these systems, it can often feel like a recipe for complexity and technical debt.

At Jamf, Levi and I work on our infrastructure efforts across AWS, Azure, and GCP. We’ve learned—sometimes the hard way—what works, what doesn’t, and where the “hype” version of cloud differs from the “production” version. We wanted to build a session that focuses on the practical:

  • How to design for portability without over-engineering.
  • Managing identity, networking, and security across different providers.
  • Avoiding the “lowest common denominator” trap.
  • Keeping your sanity while managing three different clouds.

Open Source North is a great local event for the MN tech scene because of the high-caliber community and the focus on real-world engineering. Whether you are a cloud veteran or just starting to look at a second provider, we’d love to see you there.

The Details:

If you’re attending, please connect on LinkedIn or find us after the session. We’d love to hear how your team is tackling these same challenges!

Read more

State of App Dev Report by Docker

As devs, platform engineers, and DevOps practitioners, we all feel it: the pace of change is relentless. New tools, new architectures, new expectations, and AI. It can be hard to separate where to invest our time from the hype.

That’s exactly why I decided to write this post about Docker’s 2025 State of Application Development Report.

This report is not marketing fluff. It’s based on insights from over 4,500 developers and engineering professionals and offers a grounded snapshot of how application development is actually evolving today.

Although published in 2025, this report covers long-running trends that continue to shape modern application development. Areas like containerized workflows, cloud-based development environments, AI-assisted tooling, and shared responsibility for security evolve over time rather than changing overnight.

Referencing the 2025 report ahead of the 2026 release provides valuable context. It establishes a baseline for understanding where the industry is coming from, which patterns are proving durable, and which challenges continue to persist. I’ll be looking out for the 2026 report. If you haven’t checked out the 2025 report yet, you should.

As a Docker Captain, I strongly encourage you to read the full report. But first, here are some of the key takeaways that stood out to me:

Remote-First Development Is Becoming the New Normal

One of the biggest shifts in 2025 is how developers are working:

  • 64% of developers now use non-local development environments as their primary setup
  • Only 36% rely primarily on local machines

That’s a significant change from previous years, and it speaks to the reality that cloud-based workflows, remote dev environments, and tools that unify development environments are now mainstream. This shift isn’t just a trend — it’s redefining how teams collaborate and deliver software efficiently.


Developer Productivity Still Faces Friction Points

The report highlights that, despite improvements in tooling and culture, many teams still experience bottlenecks in everyday work:

  • Pull requests stuck in review
  • Tasks without clear estimates
  • Slowdowns in the “inner development loop”

Even with great culture and tooling, friction still exists, especially around planning and execution. Knowing where dev productivity stalls helps us focus improvements where they matter most.


Learning Is Shifting to Self-Guided, Online Resources

Developers are reinventing how they learn:

  • 85% of respondents use online courses or certifications
  • Traditional sources like books or on-the-job training are less dominant

This highlights a bigger trend in continuous learning and self-driven skill development — especially important as the pace of change in languages, platforms, and architectures continues to accelerate.


AI Adoption Is Real, But Not Uniform

AI continues to influence how software is built, but adoption is still uneven:

  • Some teams are deeply integrating AI tools
  • Others are more cautious or selective

The report frames AI as an enabler, not a magic bullet. Developers are using AI to assist with documentation, research, and repetitive tasks, but real productivity gains depend on meaningful integration into workflows and data quality.


Security Is a True Team Effort

Security is no longer siloed:

  • Teams of all sizes report that developers, leads, and operations are involved in security
  • Only a small fraction of organizations outsource security entirely

The idea that “security is someone else’s job” is gone — fixing vulnerabilities and embedding security thinking into the development lifecycle is now a collective responsibility.


What This All Means for Developers

Taken together, these findings show a software landscape that’s:

  • More distributed and cloud-native
  • More self-taught and adaptable
  • More collaborative around security
  • Still facing persistent productivity barriers

These trends have real implications for how we build teams, invest in tooling, and think about developer experience.


Go Read the Full Report

The 2025 Docker State of Application Development Report is packed with additional insights, data, and analysis. Whether you’re a developer curious about AI adoption, a manager thinking about remote workflows, or a team lead prioritizing security practices, there’s something in this report for you.

Check out the full report on Docker’s blog:
https://www.docker.com/blog/2025-docker-state-of-app-dev

Read more

Azure Hub-and-Spoke Architecture Explained and Automated with OpenTofu

This is my first blog of the new year (2026)! Since I was re-awarded as a Microsoft MVP, Microsoft has provided me with a fresh set of Azure credits. One of the first things I wanted to do was rebuild my Azure lab environment. This time, I wanted to do it the right way: I wanted it to mirror how I would design and deploy a real enterprise environment, including running fully on private endpoints and following a proper hub-and-spoke network model.

Just as importantly, I wanted everything defined in Infrastructure as Code (IaC) so I could spin environments up and down whenever I needed. That also aligns perfectly with what my team at Jamf is working on right now. We are making some changes to our underlying Azure architecture, including deeper network isolation, security controls, integration with Jamf Security Cloud products, and a shift from Bicep to OpenTofu. We will also be using AI agents to do a lot of the heavy lifting in that refactor. I will share more about that in future blogs and talks, as much as I am able to publicly.

Because OpenTofu is at the center of that work, I decided to build my entire Azure lab using OpenTofu and a full hub-and-spoke architecture. This gives my team a real, working reference base implementation that we can build on for production designs. I also want to share this with the larger tech community.

If you are not familiar with OpenTofu, it is an open source infrastructure-as-code engine based on Terraform that lets you define, deploy, and manage cloud infrastructure using declarative configuration files. You can learn more at https://opentofu.org.

You can access the GitHub Repository of my “OpenTofu Azure Hub and Spoke” solution here: https://github.com/Buchatech/OpenTofu-Azure-HubSpoke-public

Let’s break down what’s in the solution I built.


Solution Architecture

The solution deploys a production-style Azure network and platform foundation that includes:

  • Hub VNet with Azure Firewall, VPN Gateway, and DNS Private Resolver
  • Spoke VNet with peering and default routes through the firewall
  • Key Vault and Azure Container Registry using private endpoints
  • Optional Jumpbox VM for secure management access
  • GitHub Actions CI/CD pipeline using OIDC authentication

How the Automation Works

This is a multi-part solution built around a bootstrap Bash script (bootstrap.sh) and a fully generated OpenTofu repository.

The bootstrap script creates everything you need to get started:

  1. It creates an Azure Storage Account to store your OpenTofu remote state.
  2. It generates a complete OpenTofu project, including modules, variables, and environment structure.
  3. It configures the backend so OpenTofu uses Azure Storage for state.
  4. It creates a ready-to-use GitHub Actions pipeline for CI/CD.
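For reference, the remote-state setup in step 1 is roughly equivalent to the following Azure CLI commands. This is a sketch, not the script itself: the resource group name comes from the repo, but the storage account name and region here are hypothetical placeholders (the script generates its own).

```shell
# Resource group for OpenTofu remote state (the script uses rg-tfstate)
az group create --name rg-tfstate --location eastus

# Storage account to hold the state file
# (storage account names must be globally unique; "sttfstatedemo123" is a placeholder)
az storage account create \
  --name sttfstatedemo123 \
  --resource-group rg-tfstate \
  --sku Standard_LRS

# Blob container the OpenTofu backend will write state into
az storage container create \
  --name tfstate \
  --account-name sttfstatedemo123
```

The advantage of remote state in Azure Storage is that both your local runs and the GitHub Actions pipeline read and lock the same state, so either can safely manage the environment.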

Once the repository is generated, you can deploy your Azure environment by running OpenTofu locally or by pushing the repo to GitHub and letting the pipeline handle deployments for you. Within minutes, you can have a fully functional Azure hub-and-spoke environment up and running, and you can customize the generated modules to fit your own requirements.
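If you choose the local route, the day-to-day workflow from the generated repo is the standard Terraform-style loop, just with the OpenTofu CLI. A quick sketch (run from the generated environment directory; exact paths depend on the structure the script emits):

```shell
# Initialize providers and the Azure Storage backend
tofu init

# Preview the hub-and-spoke resources that will be created
tofu plan -out=tfplan

# Apply the saved plan to deploy the environment
tofu apply tfplan

# Tear the lab down when you're done to save credits
tofu destroy
```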


Deployment Modes

The bootstrap bash script supports two deployment modes depending on how advanced and locked-down you want the environment to be.

FULL Mode (Default)
This is the enterprise-grade option.

  • Hub VNet with Azure Firewall, VPN Gateway, and DNS Private Resolver
  • Spoke VNet with peering and default route through the firewall
  • Private endpoints for Key Vault and Azure Container Registry
  • Optional Jumpbox VM for secure management
  • GitHub Actions CI/CD pipeline with OIDC authentication

BASIC Mode
This is a simpler version for learning or labs.

  • Hub VNet with Azure Firewall only
  • Spoke VNet with peering and default route through the firewall
  • Public access for Key Vault and Azure Container Registry
  • No Jumpbox, VPN Gateway, or DNS Private Resolver
  • GitHub Actions CI/CD pipeline with OIDC authentication

What the bootstrap.sh Script Does

When you run the bootstrap script, it will:

  1. Prompt you to select FULL or BASIC deployment mode
  2. Create an Azure Storage Account for OpenTofu remote state in rg-tfstate
  3. Generate the full OpenTofu repository structure based on your choice
  4. Configure the OpenTofu backend to use the storage account
  5. Create GitHub Actions workflow files for CI/CD
  6. Output the storage account details and the GitHub secrets you need to configure

From there, you are ready to deploy, and you can customize the script and the generated OpenTofu to manage your Azure hub-and-spoke environment entirely through code.

Here is the Readme from the repo. It goes even more in depth into my “OpenTofu Azure Hub and Spoke” solution. I hope you find it useful!

********************************************************************************

Azure Hub-Spoke with OpenTofu

Azure base network architecture solution

This repository contains a production-ready, modular OpenTofu (Terraform) configuration that deploys a complete Azure hub-spoke network topology with two deployment modes (private or public) to match your requirements and budget. It is designed for hybrid cloud scenarios, connecting your on-premises network (e.g., a UniFi network) to Azure.

Architecture Overview

This lab deploys an enterprise-grade hub-and-spoke network architecture following Azure best practices, with Site-to-Site VPN, Azure Firewall, DNS Private Resolver, and core services (the diagram below shows the full private deployment):

┌──────────────────────────────────────────────────────────────────────┐
│                            AZURE CLOUD                                │
│                                                                        │
│  ┌─── HUB VNet (rg-lab-hub-network) ────────────────────────┐        │
│  │ 10.10.0.0/16                                              │        │
│  │                                                            │        │
│  │  ┌──────────┐  ┌───────────┐  ┌────────────┐  ┌───────┐ │        │
│  │  │  Azure   │  │    VPN    │  │    DNS     │  │Jumpbox│ │        │
│  │  │ Firewall │  │  Gateway  │  │  Private   │  │  VM   │ │        │
│  │  │(10.10.1.0│  │(10.10.2.0)│  │  Resolver  │  │(Mgmt) │ │        │
│  │  │)+ DNAT   │  │           │  │(10.10.4-5.0│  │subnet │ │        │
│  │  │SSH:2222  │  │           │  │)           │  │       │ │        │
│  │  └─────┬────┘  └─────┬─────┘  └────────────┘  └───────┘ │        │
│  │        │             │                                     │        │
│  │        │             │  Site-to-Site VPN                  │        │
│  └────────┼─────────────┼─────────────────────────────────────┘        │
│           │             │                                               │
│           │  VNet Peering + Gateway Transit                            │
│           │             │                                               │
│  ┌────────▼─ SPOKE VNet (rg-lab-spoke1-network) ──────┐               │
│  │ 10.20.0.0/16                                        │               │
│  │                                                      │               │
│  │  ┌──────────┐  ┌──────────┐  ┌──────────────────┐ │               │
│  │  │   Apps   │  │   APIs   │  │   Data/Services  │ │               │
│  │  │ Subnet   │  │ Subnet   │  │     Subnet       │ │               │
│  │  │          │  │          │  │  - ACR (Private) │ │               │
│  │  │          │  │          │  │  - Key Vault     │ │               │
│  │  └──────────┘  └──────────┘  └──────────────────┘ │               │
│  │                                                      │               │
│  │  Traffic routed through Azure Firewall ─────────────┘               │
│  └──────────────────────────────────────────────────────               │
│                                                                         │
│  ┌─── Management RG (rg-lab-management) ────────────┐                 │
│  │  - Azure Container Registry (ACR)                 │                 │
│  │  - Azure Key Vault                                 │                 │
│  │  - Private Endpoints in Spoke Data subnet         │                 │
│  └────────────────────────────────────────────────────┘                 │
│                                                                         │
└─────────────────────────────┬───────────────────────────────────────────┘
                              │
                      S2S VPN Tunnel (IPsec)
                              │
              ┌───────────────▼──────────────┐
              │   ON-PREMISES NETWORK        │
              │   (e.g., UniFi Router)       │
              │   192.168.1.0/24             │
              │                              │
              │   SSH → Azure Firewall:2222  │
              │   → DNAT → Jumpbox:22        │
              └──────────────────────────────┘

Read more

Docker Hardened Images Are Now Free: What This Means for Developers and Platform Teams

Last week Docker made a big move for the container ecosystem. Docker Hardened Images (DHI) are now free and open source, making secure container foundations accessible to everyone.

If you build, deploy, or operate containerized workloads, this is one of those changes that quietly but meaningfully improves day to day security and reliability.

Let’s break down what Docker Hardened Images are, why they matter, and how you can start using them today.

What Are Docker Hardened Images?

Docker Hardened Images are base container images that come pre-hardened for security and transparency. Instead of starting from a generic base image and layering on your own security practices, DHI gives you a safer starting point out of the box.

They are designed to reduce common container risks without adding operational overhead or complexity.

In practical terms, this means Docker has already done the work many teams struggle to keep up with.


What You Get Out of the Box

When you use Docker Hardened Images, your base images now:

  • Include automated security metadata
  • Are minimalist and optimized for faster builds and startup times
  • Contain significantly fewer known vulnerabilities (CVEs) from the start
  • Are fully free and open source

This shifts container security left, right to the foundation of your application images.

There is still a paid version of Docker Hardened Images for those with enterprise needs; see Docker’s announcement for a breakdown of what you get with the free version versus the paid version.


Why This Is a Big Deal

Most container vulnerabilities originate from base images. Teams often inherit outdated packages, unused libraries, or poorly maintained dependencies without realizing it.

Docker Hardened Images help address that by:

  • Reducing the attack surface before you write any application code
  • Improving transparency into what is inside your images
  • Lowering the burden on platform and security teams
  • Making secure defaults accessible even to small teams and solo developers

Security becomes the baseline rather than an afterthought.

Read more

“Building Apps with OpenAI” my 29th Pluralsight Course!

I am excited to share that my 29th Pluralsight course is now live, titled Building Applications with OpenAI. This course guides developers through creating modern AI-powered applications using the OpenAI APIs. Whether you are just getting started with generative AI or looking to integrate it into real projects, you will walk away with practical skills you can use right away.

This was a fun course to build. In this course you will learn how to integrate OpenAI into real-world applications from end to end. We begin by setting up the OpenAI API, handling authentication, and designing effective prompts. Then we build a full-stack web app that uses AI to analyze and classify data while exploring best practices for deployment, performance monitoring, and error handling. By the end you will have the confidence to build, deploy, and scale your own AI-driven solutions.

🧠 Why This Course Matters

Generative AI is reshaping how software gets built and developers are expected to know how to integrate these capabilities into applications. This course gives you the foundational and practical knowledge to do that. You will see how to handle prompt refinement, token limits, deployment tradeoffs, and optimization strategies.

📘 Official Course Description

Generative AI is changing how software is developed, and developers are now expected to integrate AI features into modern applications. In this course, Building Applications with OpenAI, you’ll gain the skills to build, deploy, and maintain AI-powered web applications. First, you’ll explore how to configure the OpenAI API, manage authentication, and craft effective prompts. Next, you’ll build a full-stack expense tracking app that uses OpenAI to analyze and categorize expenses. Finally, you’ll learn how to deploy your app using platforms like Render or Google Cloud, monitor performance, and handle challenges such as token limits, error handling, and prompt optimization. When you’re finished with this course, you’ll have the knowledge and tools to confidently integrate OpenAI into your own applications and bring AI capabilities to your development projects.

This course is a part of the “OpenAI for Developers Path” on Pluralsight. The path can be found here: https://app.pluralsight.com/paths/skills/openai-for-developers and has many courses that will teach you various aspects of bringing OpenAI into your applications.


If you’re building applications and need to add AI, this course will help you. Check out the course here:

https://www.pluralsight.com/courses/building-applications-openai

I hope this course serves as a valuable resource in your AI journey. Thank you for your continued support, and be sure to follow my profile on Pluralsight so you will be notified as I release new courses.

Here is the link to my Pluralsight profile to follow me:

https://www.pluralsight.com/authors/steve-buchanan

Read more

My First Docker Captain Summit Experience

As many of you know, I was honored to be named a Docker Captain earlier this year (2025). This week, I had the incredible opportunity to attend my very first Docker Captain Summit, and what an experience it was.

The event reminded me a bit of the Microsoft MVP Summit, but with even closer access to the Docker product teams across multiple areas. Every year, the Captain Summit takes place in a different location, bringing together Docker staff from product groups, community management, marketing, and DevRel, along with fellow Docker Captains from around the world.

At the summit, we got an inside look at Docker’s roadmap and were among the first to learn about upcoming products and initiatives. We also had the opportunity to provide direct feedback to the product teams, helping shape the future of Docker from the community’s perspective.

This year’s summit was held in Istanbul, and it was a fantastic few days of connecting with so many brilliant people. I finally met in person several Docker staff members and Captains I’ve been collaborating with online. It was also a chance to reunite with friends from Microsoft and the MVP community.

Of course, not everything we discussed can be shared publicly because of NDAs, but I can tell you that we all walked away with some exciting insights and some awesome Docker swag.

Read more

Presenting at Applied AI 2025 Conf

I’m excited to announce that in a couple of weeks I’ll be speaking at the upcoming Applied AI Conference, an event bringing together innovators, researchers, and industry leaders who are shaping the future of Artificial Intelligence.

The Applied AI Conference is all about actionable insights where ideas meet execution. I’m looking forward to sharing lessons learned from my AI journey, hearing from other brilliant minds in the community, and connecting with attendees who are just as passionate about AI innovation.

Why This Conference Matters

The AI landscape is evolving fast, and events like the Applied AI Conference create space for meaningful conversations about what’s next. It’s not just about tools and models; it’s about empowering people, teams, and organizations to make smarter, faster decisions with AI.

This year, I’ll be giving two sessions: a standalone talk and a fireside chat with Mike Jackson. Here is more information about my sessions:

My Sessions at Applied AI Conference

The Easiest Way to Run LLMs Locally: Meet Docker Model Runner

Curious about running large language models (LLMs) on your own machine without wrestling with complicated setups? In this session, I’ll introduce Docker Model Runner, a new feature in Docker Desktop that makes it incredibly easy to run LLMs locally.

Whether you’re a developer experimenting with AI, building offline applications, or simply looking for more control over your models, this session will show you how to get started in minutes. We’ll explore real examples and walk through what makes Docker Model Runner such a powerful addition for anyone working with AI tools.

This is perfect for anyone who wants to move fast with local AI experimentation, without needing to manage complex infrastructure or cloud dependencies.
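To give a taste of what the session covers, the basic Docker Model Runner flow looks roughly like this. This is a sketch based on recent Docker Desktop releases, and the model name is just one example from Docker’s `ai/` catalog on Docker Hub:

```shell
# Pull a model from Docker Hub's ai/ namespace
docker model pull ai/smollm2

# See which models are available locally
docker model list

# Run the model with a one-off prompt
docker model run ai/smollm2 "Explain containers in one sentence."
```

Everything runs on your own machine, so you can experiment offline and keep full control over your models.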


Fireside Chat: Beyond the Hype – Practical AI Integration in Business

In this fireside chat titled “Beyond the Hype: Practical AI Integration in Business,” I’ll join Mike Jackson for a moderated discussion focused on how organizations can effectively adopt AI in the real world.

We’ll move past buzzwords to talk about real challenges, lessons learned, and success stories from our journeys working with AI so far. Drawing from my experience as an enterprise cloud leader, Microsoft MVP, author, and startup advisor, I’ll share how companies can strategically approach AI adoption from proof of concept to production use.

If you’re interested in how AI can truly add business value (not just headlines), this conversation will offer insights you can take back to your organization.

I’m honored to be presenting here and can’t wait to connect with the broader AI and developer community during the event.

If you’re attending, I’d love to see you there.
Check out the full speaker lineup here: appliedaiconf.com/speaker-directory

Read more

It’s Been a Year – Microsoft MVP for the 11th Time!

What a ride this year has been. Back in May, my entire team was eliminated and I was laid off from Microsoft. Not long after, I was honored to be named a Docker Captain, and soon after that I landed a new role leading Azure and AKS at Jamf, helping run their SaaS products in the cloud.

And yesterday, I found out that I’ve been re-awarded as a Microsoft MVP! This marks my 11th year as an MVP, all in the span of just a few months of major ups and downs. After a short detour (just under four years) working at the mothership, I’m excited to be back in the MVP community.

I never take this recognition for granted. It’s an honor to return to the MVP ranks and continue contributing as a community champion in the worlds of Microsoft, Azure, Azure Kubernetes Service, AI, and Open Source.

To all the other MVPs who were renewed—and to the new awardees announced on October 1—congratulations!

Stay tuned!

Read more