Azure Hub-and-Spoke Architecture Explained and Automated with OpenTofu

This is my first blog of the new year (2026)! Since being re-awarded as a Microsoft MVP, Microsoft provided me with a fresh set of Azure credits. One of the first things I wanted to do was rebuild my Azure lab environment. This time, I wanted to do it the right way. I wanted it to mirror how I would design and deploy a real enterprise environment, including running fully on private endpoints and following a proper hub-and-spoke network model.

Just as importantly, I wanted everything defined in Infrastructure as Code (IaC) so I could spin environments up and down whenever I needed. That also aligns perfectly with what my team at Jamf is working on right now. We are making some changes to our underlying Azure architecture, including deeper network isolation, security controls, integration with Jamf's security cloud products, and a shift from Bicep to OpenTofu. We will also be using AI agents to do a lot of the heavy lifting in that refactor. I will share more about that in future blogs and talks, as much as I am able to publicly.

Because OpenTofu is at the center of that work, I decided to build my entire Azure lab using OpenTofu and a full hub-and-spoke architecture. This gives my team a real, working reference implementation that we can build on for production designs. I also want to share it with the larger tech community.

If you are not familiar with OpenTofu, it is an open source infrastructure-as-code engine based on Terraform that lets you define, deploy, and manage cloud infrastructure using declarative configuration files. You can learn more at https://opentofu.org.

You can access the GitHub Repository of my “OpenTofu Azure Hub and Spoke” solution here: https://github.com/Buchatech/OpenTofu-Azure-HubSpoke-public

Let's break down what's in the solution I built.


Solution Architecture

The solution deploys a production-style Azure network and platform foundation that includes:

  • Hub VNet with Azure Firewall, VPN Gateway, and DNS Private Resolver
  • Spoke VNet with peering and default routes through the firewall
  • Key Vault and Azure Container Registry using private endpoints
  • Optional Jumpbox VM for secure management access
  • GitHub Actions CI/CD pipeline using OIDC authentication

How the Automation Works

This is a multi-part solution built around a bootstrap Bash script (bootstrap.sh) and a fully generated OpenTofu repository.

The bootstrap script creates everything you need to get started:

  1. It creates an Azure Storage Account to store your OpenTofu remote state.
  2. It generates a complete OpenTofu project, including modules, variables, and environment structure.
  3. It configures the backend so OpenTofu uses Azure Storage for state.
  4. It creates a ready-to-use GitHub Actions pipeline for CI/CD.
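Step 3's backend wiring generally comes down to a small backend block in the generated code. A minimal sketch of what that looks like for an Azure Storage backend is below; the storage account name and state key here are placeholders, not the exact values the script generates:

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "rg-tfstate"     # state resource group created by bootstrap.sh
    storage_account_name = "sttfstate12345" # placeholder; the script generates a unique name
    container_name       = "tfstate"        # blob container holding the state file
    key                  = "hub-spoke.tfstate"
  }
}
```

OpenTofu reads this block during `tofu init`, so the state lands in Azure Storage rather than on your local disk.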

Once the repository is generated, you can deploy your Azure environment by running OpenTofu locally or by pushing the repo to GitHub and letting the pipeline handle deployments for you. Within minutes, you can have a fully functional Azure hub-and-spoke environment up and running, and you can customize the generated modules to fit your own requirements.


Deployment Modes

The bootstrap Bash script supports two deployment modes, depending on how advanced and locked down you want the environment to be.

FULL Mode (Default)
This is the enterprise-grade option.

  • Hub VNet with Azure Firewall, VPN Gateway, and DNS Private Resolver
  • Spoke VNet with peering and default route through the firewall
  • Private endpoints for Key Vault and Azure Container Registry
  • Optional Jumpbox VM for secure management
  • GitHub Actions CI/CD pipeline with OIDC authentication

BASIC Mode
This is a simpler version for learning or labs.

  • Hub VNet with Azure Firewall only
  • Spoke VNet with peering and default route through the firewall
  • Public access for Key Vault and Azure Container Registry
  • No Jumpbox, VPN Gateway, or DNS Private Resolver
  • GitHub Actions CI/CD pipeline with OIDC authentication
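The two modes above essentially boil down to a handful of toggles. Here is a minimal sketch of how a script like bootstrap.sh might branch on the mode; the variable names and flags are my own illustrative assumptions, not the repo's actual internals:

```shell
#!/usr/bin/env bash
# Illustrative sketch of FULL vs BASIC mode selection.
MODE="${1:-FULL}"   # FULL is the default mode, per the post

case "$MODE" in
  FULL)
    DEPLOY_VPN_GATEWAY=true    # VPN Gateway + DNS Private Resolver
    USE_PRIVATE_ENDPOINTS=true # Key Vault / ACR behind private endpoints
    ;;
  BASIC)
    DEPLOY_VPN_GATEWAY=false   # firewall-only hub
    USE_PRIVATE_ENDPOINTS=false
    ;;
  *)
    echo "Unknown mode: $MODE (expected FULL or BASIC)" >&2
    exit 1
    ;;
esac

echo "mode=$MODE vpn_gateway=$DEPLOY_VPN_GATEWAY private_endpoints=$USE_PRIVATE_ENDPOINTS"
```

The real script prompts interactively, but the effect is the same: one choice fans out into which modules get generated.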

What the bootstrap.sh Script Does

When you run the bootstrap script, it will:

  1. Prompt you to select FULL or BASIC deployment mode
  2. Create an Azure Storage Account for OpenTofu remote state in rg-tfstate
  3. Generate the full OpenTofu repository structure based on your choice
  4. Configure the OpenTofu backend to use the storage account
  5. Create GitHub Actions workflow files for CI/CD
  6. Output the storage account details and the GitHub secrets you need to configure
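The CI/CD workflow from step 5 authenticates to Azure with OIDC, so no long-lived credentials are stored in GitHub. The generated workflow will differ in its details, but an OIDC-authenticated plan job generally has this shape (the secret names here are assumptions; use whatever the script outputs in step 6):

```yaml
name: tofu-plan
on: [pull_request]

permissions:
  id-token: write   # required for OIDC federation with Azure
  contents: read

jobs:
  plan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: azure/login@v2
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      - uses: opentofu/setup-opentofu@v1
      - run: tofu init && tofu plan
```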

From there, you are ready to deploy, and you can customize the script and the generated OpenTofu code to manage your Azure hub-and-spoke environment entirely through code.
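To give a sense of what the generated modules encode, hub-to-spoke peering with gateway transit plus the default route through the firewall might look like the following in azurerm HCL. The resource names, region, and IP addresses here are illustrative (based on the architecture diagram in the README below), not the repo's actual module code:

```hcl
# Hub-to-spoke peering; gateway transit lets the spoke reach the hub VPN Gateway.
resource "azurerm_virtual_network_peering" "hub_to_spoke" {
  name                      = "peer-hub-to-spoke1"
  resource_group_name       = "rg-lab-hub-network"
  virtual_network_name      = azurerm_virtual_network.hub.name
  remote_virtual_network_id = azurerm_virtual_network.spoke1.id
  allow_gateway_transit     = true
}

# Default route that sends all spoke egress through the Azure Firewall.
resource "azurerm_route_table" "spoke_default" {
  name                = "rt-spoke1-default"
  location            = "eastus"
  resource_group_name = "rg-lab-spoke1-network"

  route {
    name                   = "default-via-firewall"
    address_prefix         = "0.0.0.0/0"
    next_hop_type          = "VirtualAppliance"
    next_hop_in_ip_address = "10.10.1.4" # assumed firewall private IP in the hub
  }
}
```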

Here is the README from the repo. It goes even deeper into my “OpenTofu Azure Hub and Spoke” solution. I hope you find it useful!

********************************************************************************

Azure Hub-Spoke with OpenTofu

Azure base network architecture solution

This repository contains a production-ready, modular OpenTofu (Terraform) configuration that deploys a complete Azure hub-spoke network topology with two deployment modes (private or public) to match your requirements and budget. It is designed for hybrid cloud scenarios, connecting your on-premises network (e.g., a UniFi network) to Azure.


Architecture Overview

This lab deploys a hub-and-spoke network architecture following Azure best practices: an enterprise-grade lab environment with Site-to-Site VPN, Azure Firewall, DNS Private Resolver, and core services. The diagram below shows the full private deployment:

┌──────────────────────────────────────────────────────────────────────┐
│                            AZURE CLOUD                                │
│                                                                        │
│  ┌─── HUB VNet (rg-lab-hub-network) ────────────────────────┐        │
│  │ 10.10.0.0/16                                              │        │
│  │                                                            │        │
│  │  ┌──────────┐  ┌───────────┐  ┌────────────┐  ┌───────┐ │        │
│  │  │  Azure   │  │    VPN    │  │    DNS     │  │Jumpbox│ │        │
│  │  │ Firewall │  │  Gateway  │  │  Private   │  │  VM   │ │        │
│  │  │(10.10.1.0│  │(10.10.2.0)│  │  Resolver  │  │(Mgmt) │ │        │
│  │  │)+ DNAT   │  │           │  │(10.10.4-5.0│  │subnet │ │        │
│  │  │SSH:2222  │  │           │  │)           │  │       │ │        │
│  │  └─────┬────┘  └─────┬─────┘  └────────────┘  └───────┘ │        │
│  │        │             │                                     │        │
│  │        │             │  Site-to-Site VPN                  │        │
│  └────────┼─────────────┼─────────────────────────────────────┘        │
│           │             │                                               │
│           │  VNet Peering + Gateway Transit                            │
│           │             │                                               │
│  ┌────────▼─ SPOKE VNet (rg-lab-spoke1-network) ──────┐               │
│  │ 10.20.0.0/16                                        │               │
│  │                                                      │               │
│  │  ┌──────────┐  ┌──────────┐  ┌──────────────────┐ │               │
│  │  │   Apps   │  │   APIs   │  │   Data/Services  │ │               │
│  │  │ Subnet   │  │ Subnet   │  │     Subnet       │ │               │
│  │  │          │  │          │  │  - ACR (Private) │ │               │
│  │  │          │  │          │  │  - Key Vault     │ │               │
│  │  └──────────┘  └──────────┘  └──────────────────┘ │               │
│  │                                                      │               │
│  │  Traffic routed through Azure Firewall ─────────────┘               │
│  └──────────────────────────────────────────────────────               │
│                                                                         │
│  ┌─── Management RG (rg-lab-management) ────────────┐                 │
│  │  - Azure Container Registry (ACR)                 │                 │
│  │  - Azure Key Vault                                 │                 │
│  │  - Private Endpoints in Spoke Data subnet         │                 │
│  └────────────────────────────────────────────────────┘                 │
│                                                                         │
└─────────────────────────────┬───────────────────────────────────────────┘
                              │
                      S2S VPN Tunnel (IPsec)
                              │
              ┌───────────────▼──────────────┐
              │   ON-PREMISES NETWORK        │
              │   (e.g., UniFi Router)       │
              │   192.168.1.0/24             │
              │                              │
              │   SSH → Azure Firewall:2222  │
              │   → DNAT → Jumpbox:22        │
              └──────────────────────────────┘
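The SSH path in the diagram (on-prem → firewall port 2222 → DNAT → jumpbox port 22) corresponds to a firewall DNAT rule. A hypothetical sketch of that rule in azurerm HCL follows; the names and private IPs are illustrative assumptions drawn from the diagram, not the repo's actual code:

```hcl
resource "azurerm_firewall_nat_rule_collection" "ssh_dnat" {
  name                = "ssh-to-jumpbox"
  azure_firewall_name = azurerm_firewall.hub.name
  resource_group_name = "rg-lab-hub-network"
  priority            = 100
  action              = "Dnat"

  rule {
    name                  = "ssh-2222"
    protocols             = ["TCP"]
    source_addresses      = ["192.168.1.0/24"]                 # on-prem network
    destination_addresses = [azurerm_public_ip.fw.ip_address]  # firewall public IP
    destination_ports     = ["2222"]
    translated_address    = "10.10.3.4"                        # assumed jumpbox private IP
    translated_port       = "22"
  }
}
```

Exposing only a non-standard port on the firewall, and translating it to the jumpbox, keeps the VM itself off the public internet.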


“Building Apps with OpenAI” my 29th Pluralsight Course!

I am excited to share that my 29th Pluralsight course is now live: Building Applications with OpenAI. This course guides developers through creating modern AI-powered applications using OpenAI APIs. Whether you are just getting started with generative AI or looking to integrate it into real projects, you will walk away with practical skills you can use right away.

This was a fun course to build. In this course you will learn how to integrate OpenAI into real world applications from end to end. We begin by setting up the OpenAI API, handling authentication, and designing effective prompts. Then we build a full stack web app that uses AI to analyze and classify data while exploring best practices for deployment, performance monitoring, and error handling. By the end you will have the confidence to build, deploy, and scale your own AI driven solutions.

🧠 Why This Course Matters

Generative AI is reshaping how software gets built and developers are expected to know how to integrate these capabilities into applications. This course gives you the foundational and practical knowledge to do that. You will see how to handle prompt refinement, token limits, deployment tradeoffs, and optimization strategies.

📘 Official Course Description

Generative AI is changing how software is developed, and developers are now expected to integrate AI features into modern applications. In this course, Building Applications with OpenAI, you’ll gain the skills to build, deploy, and maintain AI-powered web applications. First, you’ll explore how to configure the OpenAI API, manage authentication, and craft effective prompts. Next, you’ll build a full-stack expense tracking app that uses OpenAI to analyze and categorize expenses. Finally, you’ll learn how to deploy your app using platforms like Render or Google Cloud, monitor performance, and handle challenges such as token limits, error handling, and prompt optimization. When you’re finished with this course, you’ll have the knowledge and tools to confidently integrate OpenAI into your own applications and bring AI capabilities to your development projects.

This course is a part of the “OpenAI for Developers Path” on Pluralsight. The path can be found here: https://app.pluralsight.com/paths/skills/openai-for-developers and has many courses that will teach you various aspects of bringing OpenAI into your applications.


If you’re building applications and need to add AI, this course will help you. Check out the course here:

https://www.pluralsight.com/courses/building-applications-openai

I hope this course serves as a valuable resource in your AI journey. Thank you for your continued support, and be sure to follow my profile on Pluralsight so you will be notified as I release new courses.

Here is the link to my Pluralsight profile to follow me:

https://www.pluralsight.com/authors/steve-buchanan


It’s Been a Year – Microsoft MVP for the 11th Time!

What a ride this year has been. Back in May, my entire team was eliminated and I was laid off from Microsoft. Not long after, I was honored to be named a Docker Captain, and soon after that I landed a new role leading Azure and AKS at Jamf, helping run their SaaS products in the cloud.

And yesterday, I found out that I’ve been re-awarded as a Microsoft MVP! This marks my 11th year as an MVP, all in the span of just a few months of major ups and downs. After a short detour (just under four years) working at the mothership, I’m excited to be back in the MVP community.

I never take this recognition for granted. It’s an honor to return to the MVP ranks and continue contributing as a community champion in the worlds of Microsoft, Azure, Azure Kubernetes Service, AI, and Open Source.

To all the other MVPs who were renewed—and to the new awardees announced on October 1—congratulations!

Stay tuned!


Guest on Code To Cloud Podcast – AI, Cloud, Career Resilience, and Farming

I’m excited to share that I recently sat down again with my former Microsoft colleague Kevin Evans on the “Code to Cloud” podcast for a conversation titled “AI, Cloud, and Career Resilience.” It has been a couple of years since I was last a guest on his podcast. This discussion was super fun and goes all over the place: from personal finance (Dave Ramsey, we are coming for the top spot!), to leaving tech to farm, to the recent layoffs at Microsoft, to what AI means for all of us, and more.

You can listen on Spotify, Apple Podcasts, or watch the full episode on YouTube.

Spotify: https://open.spotify.com/episode/1jMf7mRZNxew6trsWt8e96

Apple Podcasts link: https://podcasts.apple.com/us/podcast/ai-cloud-and-career-resilience-with-steve-buchanan/id1788423999?i=1000729123487

YouTube: https://www.youtube.com/watch?v=vmo7MdmGj-s

In this post, I wanted to share some of the highlights, key takeaways, and a few behind-the-scenes thoughts from recording.

On the podcast, Kevin and I dug into several topics, especially in today’s rapidly evolving tech landscape. Some of the themes we touched on are:

Leadership & owning your narrative
I shared lessons I’ve learned in leadership like how to set vision, how to manage through change, and how leaders can help their teams navigate ambiguity.
We also talked about taking control of your narrative rather than letting circumstances or others define it for you.

My journey in tech
We walked through my career path over the years. The ups, the challenges, the moments of uncertainty. And I shared about recently being laid off from Microsoft, pivoting roles, and how those moments shaped and continue to shape my approach to owning my career.

Career resilience and mindset
One of the things I emphasized is that resilience is not just bouncing back, it’s proactively preparing, adapting, and taking charge of your trajectory. We talked about strategies to stay relevant: continuous learning, building a network, personal branding, and leaning into uncertainty instead of resisting it.

AI + Cloud: Opportunities and disruption
We explored how AI is weaving into cloud-native infrastructure and application stacks, and what that means for technologists.
We also addressed how to stay grounded amidst the hype: understanding what’s real, what’s emerging, and how to plug into it in a practical, impactful way.

Key Takeaways and Advice for You

If you are reading this, here are a few of the ideas I hope will stick with you:

Do not wait for perfect context. The ideal job or environment might not exist yet. Instead, start shaping it yourself. Build the skills, forge relationships, and create momentum where you are.

Be purposeful in how you show up. Your personal brand is not about vanity. It is a vector for opportunities, trust, and alignment. Share your journey, your thinking, your work, even when it feels vulnerable.

Stay curious with humility. In fields like AI and cloud, change is constant. Curiosity keeps you relevant and humility keeps you open to learning when you do not know the answer.

Focus on bridges, not walls. Whether you are navigating careers, organization changes, or technical disruption, build bridges between peers, between domains, and across teams. Avoid insulating yourself.

Your resilience is in your habits. It is not just how you react in a crisis. It is how you cultivate consistency, reflection, incremental growth, and adaptability.

Behind the Mic: A Few Reflections

Recording with Kevin is always fun. His questions push guests to think more deeply than just the “what happened” stories. It was gratifying to revisit earlier chapters of my career after recently being laid off, and to see how themes like uncertainty, adaptation, and ownership have recurred over time.
I always find it special when conversations like these inspire me as much as I hope they inspire listeners and the host as well! Preparing, sharing, and telling stories helps us all get a little more confident in this unknown tech market.


If you have 45 to 60 minutes to spare, I encourage you to give the episode a listen! You will find not just stories from me but hopefully a few ideas or sparks you can take into your own path!



I'm Speaking at BITCON 2025 – Easiest Way to Run LLMs Locally: Meet Docker Model Runner

🎤 I’m excited to share that I’ll be returning to BITCON in a week! I will be speaking at BITCON 2025, a gathering focused on Black voices in technology, innovation, and community. You can check out the full speaker lineup here: BITCON 2025 Speakers. The conference this year is virtual and it's free. You can check out the site here: https://bitcon.blacksintechnology.net

The conference has a ton of great speakers lined up from some of the largest tech companies, such as Google, Microsoft, and more. And to top it off, the keynote this year is Kelsey Hightower! You don't want to miss this one.

My Session: “The Easiest Way to Run LLMs Locally: Meet Docker Model Runner”
Docker Captain: Steve Buchanan DMR session

At BITCON, I’ll be presenting “The Easiest Way to Run LLMs Locally: Meet Docker Model Runner”. In this session, I’ll look at:

  • Why run LLMs locally? The benefits in terms of cost, privacy, latency, and control
  • How Docker Model Runner simplifies things — containerizing large models, managing dependencies, and lowering friction
  • Demo and walkthrough — showing you step by step how to get a model up and running on your own machine or server
  • Best practices, pitfalls, and tips — what I’ve learned building and deploying these systems
  • Q&A / hands-on help — to get you started with your own setup

My goal is that attendees leave with a concrete, reproducible process they can apply right away.

Why It Matters

Large language models (LLMs) are powerful, but running them locally has often felt out of reach for smaller teams, indie devs, or people in resource-constrained environments. With the right tooling (like Docker Model Runner), we can lower that barrier—unlocking more experimentation, more privacy, and more control over where and how inference happens.

I believe this aligns well with the mission of BITCON: elevating voices, demystifying advanced tech, and making it accessible. I hope this talk helps bridge a gap for folks who want to explore AI locally without getting lost in infrastructure.

I am excited to be speaking at BITCON again. To learn more about my session check it out here:

BITCon Session: The Easiest Way to Run LLMs Locally: Meet Docker Model Runner

BITCON is free! Be sure to register today at https://bitcon.blacksintechnology.net.


Recent Blog Posts: MCP Servers, Dev, Multi-cloud Mastery, and Cloud Engineer Resumes

This is a shorter post, but I wanted to take a moment to share what I’ve been working on lately. Over the past few months I’ve been publishing a steady stream of blog posts on Pluralsight, covering topics across cloud, AI, JavaScript, and beyond. There’s a lot happening in tech right now, and I’ve been fortunate to collaborate with the Pluralsight team to dive into some of these exciting areas:

Check out an overview of the blog posts and use the following links to read more:

Behind the Buzzword: What is MCP (MCP Server)?
A breakdown of MCP servers and why they matter in the evolving landscape of AI.
👉 Read the post

How to Run an LLM Locally on Your Desktop
Exploring why and how you might want to run a large language model on your own machine, with a closer look at Docker Model Runner.
👉 Read the post

What to Emphasize on Your Resume as a Cloud Engineer
Tips on showcasing the skills that make cloud engineers stand out in today’s job market.
👉 Read the post

Multicloud Mastery: How to Train Teams in AWS, Azure, and GCP
Practical advice on enabling engineering teams to work across multiple clouds with confidence.
👉 Read the post

6 Cloud Cost Optimization Strategies and Tools for AWS, Azure, and GCP
A set of proven strategies and tools to help control and reduce cloud spend.
👉 Read the post

How to Add User Authentication to Your JavaScript App
A straightforward guide to securing your JavaScript applications with simple authentication techniques.
👉 Read the post

I’ll be continuing to publish more content in the months ahead, so stay tuned for future posts on cloud-native engineering, AI, and practical developer skills. If you found these articles useful, I’d love for you to check them out and share them with your network.


Docker Model Runner Blog Post

I’ve been spending a lot of time blogging on Pluralsight lately, and one of my recent posts covered a topic I’m genuinely excited about: running large language models (LLMs) locally. Specifically, I explored a tool called Docker Model Runner that makes this process more accessible for developers.

In the post, I broke down a few key ideas.

Why Run an LLM Locally

There’s a lot of momentum around cloud-hosted AI services, but running models locally still has its place. For many developers it means more control, quicker experimentation, and the ability to work outside of a cloud provider’s ecosystem.

Tools in This Space

Before zeroing in on Docker Model Runner, I broke down other ways developers are running models locally. The landscape is quickly evolving, and each tool has trade-offs in terms of usability, performance, and compatibility with different models.

Why Docker Model Runner

What really stood out to me with Docker Model Runner is how it lowers the barrier to entry. Instead of wrestling with environment setup, dependencies, and GPU drivers, you can pull down a container and get straight to experimenting. It leans into Docker’s strengths of portability and consistency, so whether you’re on a desktop, laptop, or even testing in a lab environment, the experience is smooth and repeatable.

For developers who are curious about LLMs but don’t want to get bogged down in infrastructure, this tool is a great starting point.


If you want the full breakdown and step-by-step details, you can check out my Pluralsight blog here:
👉 https://www.pluralsight.com/resources/blog/ai-and-data/how-run-llm-locally-desktop


Steve Buchanan on the SuperHuman Mindset Podcast

I recently had the honor of being a guest on the SuperHuman Mindset Podcast, hosted by my good friend and respected CISO and Cybersecurity expert Felix Asare. I have always loved the name of this show because it perfectly captures its mission.

About the Podcast

The podcast dives deep into the minds of extraordinary people who break barriers, push limits, and achieve what many might think is impossible. Each episode uncovers the mindset, habits, and stories behind their success, with the goal of inspiring you to unlock your full potential and elevate every aspect of your life.

Felix is intentional about the guests he brings on. Past guests have included:

  • Former Ghanaian President John Kufuor
  • CISO Amy Bogac
  • Physician Dr. Kambiz Farbakhsh
  • Team USA Gold Medalist Chrissy Holm
  • Many more inspiring individuals

To be invited among such an incredible lineup is something I really consider an honor.

What We Talked About

In this episode, Felix and I went beyond tech and had a wide ranging conversation that touched on:

  • Where AI is heading and how to stay plugged in
  • Authoring books and creating courses including how I overcame imposter syndrome to publish my first one
  • The Modern Developer Experience and Cloud Native trends
  • What it takes to reach the next level in tech
  • What drives me personally and professionally
  • Advice for those working their way to the top

It was one of those conversations that flowed naturally, blending personal stories with big picture insights.

Watch or Listen

You can check out the full episode here:

SuperHuman Mindset Podcast – YouTube

If you are interested in tech, mindset, or just hearing stories of pushing past limits, I think you will enjoy this one.


My 28th Pluralsight Course Published! Agentic AI Safety and Alignment

I’m excited to announce the release of my 28th Pluralsight course, and it’s a timely one, as it’s about a topic that’s becoming more important by the day: Agentic AI Safety and Alignment.

As AI agent adoption accelerates, developers and product teams are under increasing pressure to ensure these systems behave responsibly. It’s no longer enough to build capable AI agents, they must also operate safely, ethically, and in alignment with your organization’s values.

That’s exactly what this course is about.

🧠 Why This Course Matters

The rise of autonomous AI agents brings incredible potential, but also significant risk. From runaway costs to prompt injection attacks, the stakes are high. In this course, I cover how to:

  • Prevent unintended behaviors
  • Embed ethics and safety checks into agents
  • Guard against issues like prompt injection
  • Keep human oversight (human in the loop)
  • Avoid unexpected bills or policy violations

To balance theory and practice, I run through some demos using Microsoft Copilot Studio and Flowise. You’ll see how to put in safety checks, define agent constraints, implement value alignment, and put in controls that keep your agentic AI safe.

📘 Official Course Description

“As companies rapidly adopt autonomous AI agents, developers and product leads face growing pressure to ensure these systems operate safely and align with organizational values. In this course, Agentic AI Safety and Alignment, you’ll gain the ability to design and deploy agentic AI systems that are both effective and ethically sound. First, you’ll explore how to identify potential risks and prevent unintended behaviors in autonomous agents. Next, you’ll discover how to embed your organization’s values by integrating rules and safety checks into your agent design. Finally, you’ll learn how to apply guardrails that keep agents aligned and under control. When you’re finished with this course, you’ll have the skills and knowledge needed to build AI agents that operate responsibly and stay true to your company’s principles.”

If you’re building, leading, or managing AI agent systems, this course will help you. Check out the course here:

https://www.pluralsight.com/courses/agentic-ai-safety-alignment

I hope this course serves as a valuable resource in your AI journey. Thank you for your continued support, and be sure to follow my profile on Pluralsight so you will be notified as I release new courses.

Here is the link to my Pluralsight profile to follow me:

https://www.pluralsight.com/authors/steve-buchanan


First Docker.com Blog Post – Using Gordon (AI) to Containerize Your Apps and Work with Containers

I’m excited to share that my first official blog post as a Docker Captain has been published on the Docker blog! It’s an honor to contribute to a platform that’s been so foundational in shaping how we build, ship, and run applications today. This first piece dives into Ask Gordon, Docker’s new AI assistant that helps developers go from source code to a running container with less friction and guesswork.

In the post, I walk through how Ask Gordon makes it easier to containerize your applications, even if you’ve never written a Dockerfile before. By analyzing your source code and asking a few smart questions, Ask Gordon generates everything you need to build and run your app in a containerized environment. It’s good for beginners getting started with containers and equally valuable for experienced devs looking to speed up repetitive setup tasks.

One of the things I appreciated most about Ask Gordon is how it bridges the gap between the developer’s intent and the actual container configuration. Rather than copy-pasting snippets from docs or Stack Overflow, the AI gives you context-aware Dockerfiles, Compose files, and clear next steps for your app. It’s a great example of how AI can elevate the developer experience without overcomplicating things.

This is just the beginning of my journey as a Docker Captain, and I’m looking forward to sharing more tutorials, insights, and real-world use cases that can help developers simplify their container workflows. If you haven’t checked it out yet, give my new post a read here:
👉 Containerize Your Apps with Ask Gordon

A big thanks to the Docker team for the warm welcome and opportunity!
