It’s hard to believe, but it’s been a couple of years since I last sat down with Richard Campbell on RunAs Radio. Technology moves fast, but the cloud landscape has matured in ways that were only just beginning during my last visit.
I recently joined Richard for my third appearance on the show (Episode #1025) to talk about a challenge that is becoming the “new normal” for major SaaS providers: expanding a cloud-native stack across multiple clouds.
From Single-Cloud Roots to Multi-Cloud Reality
At Jamf, we’ve built a strong reputation for managing Apple devices at scale. Historically, our SaaS product was rooted in AWS. However, as our customer base grows, now serving more than 70,000 customers worldwide, the demand for flexibility grows with it.
In this episode, we discuss the journey of bringing those SaaS workloads to Azure and AKS. It isn’t just about “moving” code; it’s about architecting for consistency without losing the unique benefits of each cloud provider.
Kubernetes: The Common Ground (But Not the Whole Story)
One of the key takeaways from our chat is that while Kubernetes (AKS, EKS, GKE) provides the common operating system for the modern cloud, it isn’t a “magic wand” for multi-cloud.
To achieve true consistency, you have to look past the orchestrator and focus on the surrounding ecosystem. We dove into the complexities of:
IaC & Deployment: Why tools like OpenTofu are becoming essential for maintaining cloud-agnostic deployments.
Observability: Using Prometheus and Grafana to ensure that your SRE teams see the same data regardless of whether the backend is Azure or AWS.
Identity: Navigating the friction between different identity providers to ensure a seamless experience for the end user and how platforms like Okta support this.
The Docker & AI Connection
We couldn’t have a conversation in 2026 without touching on the elephant in the room: AI. As a Microsoft MVP focused on AKS and a Docker Captain, I’ve been watching closely how the Kubernetes and container ecosystem is evolving to support AI/ML workloads. Richard and I spent some time discussing how Docker, Inc. is positioning itself in this space and how developers can leverage these tools to build AI-ready applications without getting locked into a single vendor’s proprietary stack.
Reflections on a Maturing Landscape
Coming back to RunAs Radio for a third time allowed me to reflect on just how much our industry has shifted. We’ve moved past the “is the cloud safe?” phase and into the “how do we optimize for a multi-cloud world?” phase.
Whether you are a platform engineer, a developer, or a technical leader, the lessons I’ve learned at Accenture, at Microsoft, helping startups, and now at Jamf while scaling across multiple clouds are applicable to almost any modern enterprise.
I am excited to share that I will be speaking at this year’s Open Source North conference on May 29, 2025, at the University of St. Thomas in St. Paul.
This year, I’m teaming up with my Jamf colleague Levi McCormick (Director of Engineering at Jamf) for a session that is very close to our daily reality: Multi-Cloud Without the Marketing, or Designing for Multi-Cloud Without Losing Your Mind.
Why this talk? In the cloud industry, “Multi-Cloud”, “Cloud Native”, and “IaC via Terraform” are often sold as magic pills for redundancy, cost savings, unification, and more across clouds. But for the people actually building and maintaining these systems, it can often feel like a recipe for complexity and technical debt.
At Jamf, Levi and I work on our infrastructure efforts across AWS, Azure, and GCP. We’ve learned—sometimes the hard way—what works, what doesn’t, and where the “hype” version of cloud differs from the “production” version. We wanted to build a session that focuses on the practical:
How to design for portability without over-engineering.
Managing identity, networking, and security across different providers.
Avoiding the “lowest common denominator” trap.
Keeping your sanity while managing three different clouds.
Open Source North is a great local event for the MN tech scene because of its high-caliber community and its focus on real-world engineering. Whether you are a cloud veteran or just starting to look at a second provider, we’d love to see you there.
The Details:
Conference: Open Source North 2025
Date: May 29, 2025
Location: University of St. Thomas (St. Paul Campus)
This is my first blog of the new year (2026)! Since being re-awarded as a Microsoft MVP, Microsoft provided me with a fresh set of Azure credits. One of the first things I wanted to do was rebuild my Azure lab environment. This time, I wanted to do it the right way. I wanted it to mirror how I would design and deploy a real enterprise environment, including running fully on private endpoints and following a proper hub-and-spoke network model.
Just as importantly, I wanted everything defined in Infrastructure as Code (IaC) so I could spin environments up and down whenever I needed. That also aligns perfectly with what my team at Jamf is working on right now. We are making some changes to our underlying Azure architecture, including deeper network isolation, stronger security controls, integration with Jamf Security Cloud products, and a shift from Bicep to OpenTofu. We will also be using AI agents to do a lot of the heavy lifting in that refactor. I will be sharing more about that in future blogs and talks as much as I am able to publicly.
Because OpenTofu is at the center of that work, I decided to build my entire Azure lab using OpenTofu and a full hub-and-spoke architecture. This gives my team a real, working reference base implementation that we can build on for production designs. I also want to share this with the larger tech community.
If you are not familiar with OpenTofu, it is an open source infrastructure-as-code engine based on Terraform that lets you define, deploy, and manage cloud infrastructure using declarative configuration files. You can learn more at https://opentofu.org.
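As a quick taste of what “declarative configuration” looks like, here is a minimal OpenTofu configuration that deploys a single resource group. The names and location are illustrative, not taken from the lab repo:

```hcl
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 4.0"
    }
  }
}

provider "azurerm" {
  features {}
}

# A single resource group; OpenTofu figures out whether to create,
# update, or leave it alone on each run of `tofu apply`.
resource "azurerm_resource_group" "lab" {
  name     = "rg-opentofu-lab"
  location = "centralus"
}
```

Note that OpenTofu keeps Terraform’s `terraform {}` block name for compatibility, so existing Terraform configurations generally work unchanged.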
The solution deploys a production-style Azure network and platform foundation that includes:
Hub VNet with Azure Firewall, VPN Gateway, and DNS Private Resolver
Spoke VNet with peering and default routes through the firewall
Key Vault and Azure Container Registry using private endpoints
Optional Jumpbox VM for secure management access
GitHub Actions CI/CD pipeline using OIDC authentication
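To make the shape of the hub-and-spoke wiring concrete, here is a heavily trimmed HCL sketch. The real modules in the repo are more complete; the names, address spaces, and firewall IP below are placeholders, and a resource group `azurerm_resource_group.lab` is assumed to exist elsewhere in the configuration:

```hcl
resource "azurerm_virtual_network" "hub" {
  name                = "vnet-hub"
  resource_group_name = azurerm_resource_group.lab.name
  location            = azurerm_resource_group.lab.location
  address_space       = ["10.0.0.0/16"]
}

resource "azurerm_virtual_network" "spoke" {
  name                = "vnet-spoke"
  resource_group_name = azurerm_resource_group.lab.name
  location            = azurerm_resource_group.lab.location
  address_space       = ["10.1.0.0/16"]
}

# Peer the spoke into the hub (a matching hub-to-spoke peering is also needed).
resource "azurerm_virtual_network_peering" "spoke_to_hub" {
  name                      = "peer-spoke-hub"
  resource_group_name       = azurerm_resource_group.lab.name
  virtual_network_name      = azurerm_virtual_network.spoke.name
  remote_virtual_network_id = azurerm_virtual_network.hub.id
  allow_forwarded_traffic   = true
}

# Default route sending all spoke egress through the Azure Firewall in the hub.
resource "azurerm_route_table" "spoke" {
  name                = "rt-spoke"
  resource_group_name = azurerm_resource_group.lab.name
  location            = azurerm_resource_group.lab.location

  route {
    name                   = "default-via-firewall"
    address_prefix         = "0.0.0.0/0"
    next_hop_type          = "VirtualAppliance"
    next_hop_in_ip_address = "10.0.1.4" # the firewall's private IP (placeholder)
  }
}
```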
How the Automation Works
This is a multi-part solution built around a bootstrap Bash script (bootstrap.sh) and a fully generated OpenTofu repository.
The bootstrap script creates everything you need to get started:
It creates an Azure Storage Account to store your OpenTofu remote state.
It generates a complete OpenTofu project, including modules, variables, and environment structure.
It configures the backend so OpenTofu uses Azure Storage for state.
It creates a ready-to-use GitHub Actions pipeline for CI/CD.
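The backend configuration the script generates looks roughly like the fragment below. The storage account and key names here are placeholders; the bootstrap script fills in the real values for the storage account it just created:

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "rg-tfstate"
    storage_account_name = "sttfstate12345" # placeholder; generated by bootstrap.sh
    container_name       = "tfstate"
    key                  = "hub-spoke.tfstate"
    use_oidc             = true # lets the CI pipeline authenticate without secrets
  }
}
```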
Once the repository is generated, you can deploy your Azure environment by running OpenTofu locally or by pushing the repo to GitHub and letting the pipeline handle deployments for you. Within minutes, you can have a fully functional Azure hub-and-spoke environment up and running, and you can customize the generated modules to fit your own requirements.
Deployment Modes
The bootstrap Bash script supports two deployment modes, depending on how advanced and locked down you want the environment to be.
FULL Mode (Default)
This is the enterprise-grade option.
Hub VNet with Azure Firewall, VPN Gateway, and DNS Private Resolver
Spoke VNet with peering and default route through the firewall
Private endpoints for Key Vault and Azure Container Registry
Optional Jumpbox VM for secure management
GitHub Actions CI/CD pipeline with OIDC authentication
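As an illustration of what “private endpoints” means in OpenTofu terms, here is a hedged sketch of a Key Vault locked to a private endpoint. Names are placeholders, and it assumes a resource group, a private-endpoint subnet, and a `data "azurerm_client_config" "current"` block exist elsewhere in the configuration:

```hcl
resource "azurerm_key_vault" "lab" {
  name                          = "kv-lab-example"
  resource_group_name           = azurerm_resource_group.lab.name
  location                      = azurerm_resource_group.lab.location
  tenant_id                     = data.azurerm_client_config.current.tenant_id
  sku_name                      = "standard"
  public_network_access_enabled = false # reachable only over the private endpoint
}

resource "azurerm_private_endpoint" "kv" {
  name                = "pe-kv-lab"
  resource_group_name = azurerm_resource_group.lab.name
  location            = azurerm_resource_group.lab.location
  subnet_id           = azurerm_subnet.spoke_private_endpoints.id # assumed subnet

  private_service_connection {
    name                           = "kv-connection"
    private_connection_resource_id = azurerm_key_vault.lab.id
    subresource_names              = ["vault"]
    is_manual_connection           = false
  }
}
```

The same pattern applies to Azure Container Registry, with `["registry"]` as the subresource.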
BASIC Mode
This is a simpler version for learning or labs.
Hub VNet with Azure Firewall only
Spoke VNet with peering and default route through the firewall
Public access for Key Vault and Azure Container Registry
No Jumpbox, VPN Gateway, or DNS Private Resolver
GitHub Actions CI/CD pipeline with OIDC authentication
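One common way to express a FULL/BASIC split in OpenTofu is a validated mode variable gating the FULL-only resources with `count`. This is my assumption about the pattern, not the repo’s exact variable names:

```hcl
variable "deployment_mode" {
  type        = string
  default     = "FULL"
  description = "FULL adds VPN Gateway, DNS Private Resolver, Jumpbox, and private endpoints; BASIC skips them."

  validation {
    condition     = contains(["FULL", "BASIC"], var.deployment_mode)
    error_message = "deployment_mode must be FULL or BASIC."
  }
}

locals {
  full_mode = var.deployment_mode == "FULL"
}

# FULL-only resources get a conditional count, e.g. the DNS Private Resolver
# (assumes a hub VNet and resource group defined elsewhere):
resource "azurerm_private_dns_resolver" "hub" {
  count               = local.full_mode ? 1 : 0
  name                = "dnspr-hub"
  resource_group_name = azurerm_resource_group.lab.name
  location            = azurerm_resource_group.lab.location
  virtual_network_id  = azurerm_virtual_network.hub.id
}
```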
What the bootstrap.sh Script Does
When you run the bootstrap script, it will:
Prompt you to select FULL or BASIC deployment mode
Create an Azure Storage Account for OpenTofu remote state in rg-tfstate
Generate the full OpenTofu repository structure based on your choice
Configure the OpenTofu backend to use the storage account
Create GitHub Actions workflow files for CI/CD
Output the storage account details and the GitHub secrets you need to configure
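For context on the OIDC side, the Entra ID pieces behind keyless GitHub Actions authentication can themselves be expressed in HCL. This is a hedged sketch using the `azuread` provider (attribute names per provider v3; the repo and branch in the subject claim are placeholders), and the client, tenant, and subscription IDs it yields are the values you store as GitHub secrets:

```hcl
resource "azuread_application" "github" {
  display_name = "gh-opentofu-lab" # placeholder app name
}

resource "azuread_service_principal" "github" {
  client_id = azuread_application.github.client_id
}

# Trust tokens issued by GitHub Actions for a specific repo and branch,
# so the pipeline can log in to Azure without a stored client secret.
resource "azuread_application_federated_identity_credential" "main_branch" {
  application_id = azuread_application.github.id
  display_name   = "github-main"
  audiences      = ["api://AzureADTokenExchange"]
  issuer         = "https://token.actions.githubusercontent.com"
  subject        = "repo:my-org/my-repo:ref:refs/heads/main" # placeholder repo
}
```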
From there, you can deploy and customize your Azure hub-and-spoke environment entirely through code.
Here is the README from the repo. It goes into even more depth on my “OpenTofu Azure Hub and Spoke” solution. I hope you find it useful!
Enterprise-grade Azure network architecture lab environment with Site-to-Site VPN, Azure Firewall, DNS Private Resolver, and core services.

This repository contains a production-ready, modular OpenTofu (Terraform) configuration that deploys a complete Azure hub-and-spoke network topology designed for hybrid cloud scenarios, connecting your on-premises network (e.g., a UniFi network) to Azure. Two deployment modes (private or public) let you match the environment to your requirements and budget.

Architecture Overview

This lab deploys a hub-and-spoke network architecture following Azure best practices (the visual shows the full private deployment).