State of App Dev Report by Docker

As devs, platform engineers, and DevOps practitioners, we all feel it: the pace of change is relentless. New tools, new architectures, new expectations, and AI. It can be hard to separate what is worth our time from what is just hype.

That’s exactly why I decided to write this post about the 2025 Docker State of Application Development Report.

This report is not marketing fluff. It’s based on insights from over 4,500 developers and engineering professionals and offers a grounded snapshot of how application development is actually evolving today.

Although published in 2025, this report covers long-running trends that continue to shape modern application development. Areas like containerized workflows, cloud-based development environments, AI-assisted tooling, and shared responsibility for security evolve over time rather than changing overnight.

Referencing the 2025 report ahead of the 2026 release provides valuable context. It establishes a baseline for understanding where the industry is coming from, which patterns are proving durable, and which challenges continue to persist. I’ll be looking out for the 2026 report. If you haven’t checked out the 2025 report yet, you should.

As a Docker Captain, I strongly encourage you to read the full report. But first, here are some of the key takeaways that stood out to me:

Remote-First Development Is Becoming the New Normal

One of the biggest shifts in 2025 is how developers are working:

  • 64% of developers now use non-local development environments as their primary setup
  • Only 36% rely primarily on local machines

That’s a significant change from previous years, and it speaks to the reality that cloud-based workflows, remote dev environments, and tools that unify development environments are now mainstream. This shift isn’t just a trend — it’s redefining how teams collaborate and deliver software efficiently.


Developer Productivity Still Faces Friction Points

The report highlights that, despite improvements in tooling and culture, many teams still experience bottlenecks in everyday work:

  • Pull requests stuck in review
  • Tasks without clear estimates
  • Slowdowns in the “inner development loop”

Even with great culture and tooling, friction still exists, especially around planning and execution. Knowing where dev productivity stalls helps us focus improvements where they matter most.


Learning Is Shifting to Self-Guided, Online Resources

Developers are reinventing how they learn:

  • 85% of respondents use online courses or certifications
  • Traditional sources like books or on-the-job training are less dominant

This highlights a bigger trend in continuous learning and self-driven skill development — especially important as the pace of change in languages, platforms, and architectures continues to accelerate.


AI Adoption Is Real, But Not Uniform

AI continues to influence how software is built, but adoption is still uneven:

  • Some teams are deeply integrating AI tools
  • Others are more cautious or selective

The report frames AI as an enabler, not a magic bullet. Developers are using AI to assist with documentation, research, and repetitive tasks, but real productivity gains depend on meaningful integration into workflows and data quality.


Security Is a True Team Effort

Security is no longer siloed:

  • Teams of all sizes report that developers, leads, and operations are involved in security
  • Only a small fraction of organizations outsource security entirely

The idea that “security is someone else’s job” is gone — fixing vulnerabilities and embedding security thinking into the development lifecycle is now a collective responsibility.


What This All Means for Developers

Taken together, these findings show a software landscape that’s:

  • More distributed and cloud-native
  • More self-taught and adaptable
  • More collaborative around security
  • Still facing persistent productivity barriers

These trends have real implications for how we build teams, invest in tooling, and think about developer experience.


Go Read the Full Report

The 2025 Docker State of Application Development Report is packed with additional insights, data, and analysis. Whether you’re a developer curious about AI adoption, a manager thinking about remote workflows, or a team lead prioritizing security practices, there’s something in this report for you.

Check out the full report on Docker’s blog:
https://www.docker.com/blog/2025-docker-state-of-app-dev

Read more

Azure Hub-and-Spoke Architecture Explained and Automated with OpenTofu

This is my first blog of the new year (2026)! Since being re-awarded as a Microsoft MVP, Microsoft provided me with a fresh set of Azure credits. One of the first things I wanted to do was rebuild my Azure lab environment. This time, I wanted to do it the right way. I wanted it to mirror how I would design and deploy a real enterprise environment, including running fully on private endpoints and following a proper hub-and-spoke network model.

Just as importantly, I wanted everything defined in Infrastructure as Code (IaC) so I could spin environments up and down whenever I needed. That also aligns perfectly with what my team at Jamf is working on right now. We are making some changes to our underlying Azure architecture, including deeper network isolation, security controls, integration with Jamf Security Cloud products, and a shift from Bicep to OpenTofu. We will also be using AI agents to do a lot of the heavy lifting in that refactor. I will be sharing more about that in future blogs and talks as much as I am able to publicly.

Because OpenTofu is at the center of that work, I decided to build my entire Azure lab using OpenTofu and a full hub-and-spoke architecture. This gives my team a real, working reference base implementation that we can build on for production designs. I also want to share this with the larger tech community.

If you are not familiar with OpenTofu, it is an open source infrastructure-as-code engine based on Terraform that lets you define, deploy, and manage cloud infrastructure using declarative configuration files. You can learn more at https://opentofu.org.

You can access the GitHub Repository of my “OpenTofu Azure Hub and Spoke” solution here: https://github.com/Buchatech/OpenTofu-Azure-HubSpoke-public

Let’s break down what’s in the solution I built.


Solution Architecture

The solution deploys a production-style Azure network and platform foundation that includes:

  • Hub VNet with Azure Firewall, VPN Gateway, and DNS Private Resolver
  • Spoke VNet with peering and default routes through the firewall
  • Key Vault and Azure Container Registry using private endpoints
  • Optional Jumpbox VM for secure management access
  • GitHub Actions CI/CD pipeline using OIDC authentication
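To give a feel for the generated code, here is a minimal OpenTofu sketch of the hub/spoke VNets and their peering. The address spaces and resource group names mirror the lab (10.10.0.0/16 hub, 10.20.0.0/16 spoke), but the resource names and region are illustrative, not the repo’s exact code:

```hcl
# Hub and spoke VNets (names and location are illustrative)
resource "azurerm_virtual_network" "hub" {
  name                = "vnet-lab-hub"
  resource_group_name = "rg-lab-hub-network"
  location            = "eastus"
  address_space       = ["10.10.0.0/16"]
}

resource "azurerm_virtual_network" "spoke1" {
  name                = "vnet-lab-spoke1"
  resource_group_name = "rg-lab-spoke1-network"
  location            = "eastus"
  address_space       = ["10.20.0.0/16"]
}

# Two-way peering; the spoke reaches on-premises through the hub's
# VPN Gateway via gateway transit
resource "azurerm_virtual_network_peering" "hub_to_spoke" {
  name                      = "hub-to-spoke1"
  resource_group_name       = "rg-lab-hub-network"
  virtual_network_name      = azurerm_virtual_network.hub.name
  remote_virtual_network_id = azurerm_virtual_network.spoke1.id
  allow_gateway_transit     = true
}

resource "azurerm_virtual_network_peering" "spoke_to_hub" {
  name                      = "spoke1-to-hub"
  resource_group_name       = "rg-lab-spoke1-network"
  virtual_network_name      = azurerm_virtual_network.spoke1.name
  remote_virtual_network_id = azurerm_virtual_network.hub.id
  use_remote_gateways       = true
}
```

A route table on the spoke subnets (not shown) then sends 0.0.0.0/0 to the Azure Firewall’s private IP, which is what produces the default-route-through-the-firewall behavior listed above.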

How the Automation Works

This is a multi-part solution built around a bootstrap Bash script (bootstrap.sh) and a fully generated OpenTofu repository.

The bootstrap script creates everything you need to get started:

  1. It creates an Azure Storage Account to store your OpenTofu remote state.
  2. It generates a complete OpenTofu project, including modules, variables, and environment structure.
  3. It configures the backend so OpenTofu uses Azure Storage for state.
  4. It creates a ready-to-use GitHub Actions pipeline for CI/CD.

Once the repository is generated, you can deploy your Azure environment by running OpenTofu locally or by pushing the repo to GitHub and letting the pipeline handle deployments for you. Within minutes, you can have a fully functional Azure hub-and-spoke environment up and running, and you can customize the generated modules to fit your own requirements.
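For reference, the backend wiring the script generates looks roughly like this. The rg-tfstate resource group comes from the bootstrap process; the storage account and container names below are placeholders for the values the script fills in:

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "rg-tfstate"        # created by bootstrap.sh
    storage_account_name = "sttfstateexample"  # placeholder; the script generates a unique name
    container_name       = "tfstate"           # placeholder container name
    key                  = "hubspoke.terraform.tfstate"
  }
}
```

With this in place, `tofu init` connects to the Azure Storage backend, and both local runs and the GitHub Actions pipeline share the same state.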


Deployment Modes

The bootstrap Bash script supports two deployment modes, depending on how advanced and locked down you want the environment to be.

FULL Mode (Default)
This is the enterprise-grade option.

  • Hub VNet with Azure Firewall, VPN Gateway, and DNS Private Resolver
  • Spoke VNet with peering and default route through the firewall
  • Private endpoints for Key Vault and Azure Container Registry
  • Optional Jumpbox VM for secure management
  • GitHub Actions CI/CD pipeline with OIDC authentication

BASIC Mode
This is a simpler version for learning or labs.

  • Hub VNet with Azure Firewall only
  • Spoke VNet with peering and default route through the firewall
  • Public access for Key Vault and Azure Container Registry
  • No Jumpbox, VPN Gateway, or DNS Private Resolver
  • GitHub Actions CI/CD pipeline with OIDC authentication
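One way the FULL/BASIC split could be expressed in the generated code is a validated input variable that the modules branch on. This is an illustrative sketch, not the repo’s exact implementation:

```hcl
variable "deployment_mode" {
  description = "FULL deploys the enterprise-grade stack; BASIC deploys the simpler lab stack."
  type        = string
  default     = "FULL"

  validation {
    condition     = contains(["FULL", "BASIC"], var.deployment_mode)
    error_message = "deployment_mode must be either FULL or BASIC."
  }
}

# Example of branching on the mode: the VPN Gateway and DNS Private
# Resolver only exist in FULL mode
locals {
  deploy_vpn_gateway = var.deployment_mode == "FULL"
}
```

Gating optional resources on a local like this (e.g., via `count`) keeps both modes in one codebase instead of maintaining two parallel configurations.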

What the bootstrap.sh Script Does

When you run the bootstrap script, it will:

  1. Prompt you to select FULL or BASIC deployment mode
  2. Create an Azure Storage Account for OpenTofu remote state in rg-tfstate
  3. Generate the full OpenTofu repository structure based on your choice
  4. Configure the OpenTofu backend to use the storage account
  5. Create GitHub Actions workflow files for CI/CD
  6. Output the storage account details and the GitHub secrets you need to configure

From there, you are ready to deploy, and you can customize the script and the generated OpenTofu code to manage your Azure hub-and-spoke environment entirely through code.

Here is the README from the repo. It goes into even more depth on my “OpenTofu Azure Hub and Spoke” solution. I hope you find it useful!

********************************************************************************

Azure Hub-Spoke with OpenTofu

Azure base network architecture solution

This repository contains a production-ready, modular OpenTofu (Terraform) configuration that deploys a complete Azure hub-spoke network topology with two deployment modes (private or public) to match your requirements and budget. It is designed for hybrid cloud scenarios, connecting your on-premises network (e.g., a UniFi network) to Azure.


Architecture Overview

This solution deploys a hub-and-spoke network architecture following Azure best practices (visual shows the full private deployment):

Enterprise-grade Azure network architecture lab environment with Site-to-Site VPN, Azure Firewall, DNS Private Resolver, and core services

┌──────────────────────────────────────────────────────────────────────┐
│                            AZURE CLOUD                                │
│                                                                        │
│  ┌─── HUB VNet (rg-lab-hub-network) ────────────────────────┐        │
│  │ 10.10.0.0/16                                              │        │
│  │                                                            │        │
│  │  ┌──────────┐  ┌───────────┐  ┌────────────┐  ┌───────┐ │        │
│  │  │  Azure   │  │    VPN    │  │    DNS     │  │Jumpbox│ │        │
│  │  │ Firewall │  │  Gateway  │  │  Private   │  │  VM   │ │        │
│  │  │(10.10.1.0│  │(10.10.2.0)│  │  Resolver  │  │(Mgmt) │ │        │
│  │  │)+ DNAT   │  │           │  │(10.10.4-5.0│  │subnet │ │        │
│  │  │SSH:2222  │  │           │  │)           │  │       │ │        │
│  │  └─────┬────┘  └─────┬─────┘  └────────────┘  └───────┘ │        │
│  │        │             │                                     │        │
│  │        │             │  Site-to-Site VPN                  │        │
│  └────────┼─────────────┼─────────────────────────────────────┘        │
│           │             │                                               │
│           │  VNet Peering + Gateway Transit                            │
│           │             │                                               │
│  ┌────────▼─ SPOKE VNet (rg-lab-spoke1-network) ──────┐               │
│  │ 10.20.0.0/16                                        │               │
│  │                                                      │               │
│  │  ┌──────────┐  ┌──────────┐  ┌──────────────────┐ │               │
│  │  │   Apps   │  │   APIs   │  │   Data/Services  │ │               │
│  │  │ Subnet   │  │ Subnet   │  │     Subnet       │ │               │
│  │  │          │  │          │  │  - ACR (Private) │ │               │
│  │  │          │  │          │  │  - Key Vault     │ │               │
│  │  └──────────┘  └──────────┘  └──────────────────┘ │               │
│  │                                                      │               │
│  │  Traffic routed through Azure Firewall ─────────────┘               │
│  └──────────────────────────────────────────────────────               │
│                                                                         │
│  ┌─── Management RG (rg-lab-management) ────────────┐                 │
│  │  - Azure Container Registry (ACR)                 │                 │
│  │  - Azure Key Vault                                 │                 │
│  │  - Private Endpoints in Spoke Data subnet         │                 │
│  └────────────────────────────────────────────────────┘                 │
│                                                                         │
└─────────────────────────────┬───────────────────────────────────────────┘
                              │
                      S2S VPN Tunnel (IPsec)
                              │
              ┌───────────────▼──────────────┐
              │   ON-PREMISES NETWORK        │
              │   (e.g., UniFi Router)       │
              │   192.168.1.0/24             │
              │                              │
              │   SSH → Azure Firewall:2222  │
              │   → DNAT → Jumpbox:22        │
              └──────────────────────────────┘

Read more

Docker Hardened Images Are Now Free: What This Means for Developers and Platform Teams

Last week Docker made a big move for the container ecosystem. Docker Hardened Images (DHI) are now free and open source, making secure container foundations accessible to everyone.

If you build, deploy, or operate containerized workloads, this is one of those changes that quietly but meaningfully improves day to day security and reliability.

Let’s break down what Docker Hardened Images are, why they matter, and how you can start using them today.

What Are Docker Hardened Images?

Docker Hardened Images are base container images that come pre-hardened for security and transparency. Instead of starting from a generic base image and layering on your own security practices, DHI gives you a safer starting point out of the box.

They are designed to reduce common container risks without adding operational overhead or complexity.

In practical terms, this means Docker has already done the work many teams struggle to keep up with.


What You Get Out of the Box

When you use Docker Hardened Images, your base images now:

  • Include automated security metadata
  • Are minimalist and optimized for faster builds and startup times
  • Contain significantly fewer known vulnerabilities (CVEs) from the start
  • Are fully free and open source

This shifts container security left, right to the foundation of your application images.

There is still a paid version of Docker Hardened Images for those with enterprise needs. Here is a breakdown of what you get with the free Docker Hardened Images versus the paid version.


Why This Is a Big Deal

Most container vulnerabilities originate from base images. Teams often inherit outdated packages, unused libraries, or poorly maintained dependencies without realizing it.

Docker Hardened Images help address that by:

  • Reducing the attack surface before you write any application code
  • Improving transparency into what is inside your images
  • Lowering the burden on platform and security teams
  • Making secure defaults accessible even to small teams and solo developers

Security becomes the baseline rather than an afterthought.

Read more

My First Docker Captain Summit Experience

As many of you know, I was honored to be named a Docker Captain earlier this year (2025). This week, I had the incredible opportunity to attend my very first Docker Captain Summit, and what an experience it was.

The event reminded me a bit of the Microsoft MVP Summit, but with even closer access to the Docker product teams across multiple areas. Every year, the Captain Summit takes place in a different location, bringing together Docker staff from product groups, community management, marketing, and DevRel, along with fellow Docker Captains from around the world.

At the summit, we got an inside look at Docker’s roadmap and were among the first to learn about upcoming products and initiatives. We also had the opportunity to provide direct feedback to the product teams, helping shape the future of Docker from the community’s perspective.

This year’s summit was held in Istanbul, and it was a fantastic few days of connecting with so many brilliant people. I finally met in person several Docker staff members and Captains I’ve been collaborating with online. It was also a chance to reunite with friends from Microsoft and the MVP community.

Of course, not everything we discussed can be shared publicly because of NDAs, but I can tell you that we all walked away with some exciting insights and some awesome Docker swag.

Read more

I’m Speaking at BITCON 2025 – Easiest Way to Run LLMs Locally: Meet Docker Model Runner

🎤 I’m excited to share that I’ll be returning to BITCON in a week! I will be speaking at BITCON 2025, a gathering focused on Black voices in technology, innovation, and community. You can check out the full speaker lineup here: BITCON 2025 Speakers. The conference this year is virtual, and it’s free. You can check out the site here: https://bitcon.blacksintechnology.net

The conference has a ton of great speakers lined up from some of the largest tech companies, such as Google, Microsoft, and more. And to top it off, the keynote speaker this year is Kelsey Hightower! You don’t want to miss this one.

My Session: “The Easiest Way to Run LLMs Locally: Meet Docker Model Runner”
Docker Captain: Steve Buchanan DMR session

At BITCON, I’ll be presenting “The Easiest Way to Run LLMs Locally: Meet Docker Model Runner”. In this session, I’ll look at:

  • Why run LLMs locally? The benefits in terms of cost, privacy, latency, and control
  • How Docker Model Runner simplifies things — containerizing large models, managing dependencies, and lowering friction
  • Demo and walkthrough — showing you step by step how to get a model up and running on your own machine or server
  • Best practices, pitfalls, and tips — what I’ve learned building and deploying these systems
  • Q&A / hands-on help — to get you started with your own setup

My goal is that attendees leave with a concrete, reproducible process they can apply right away.

Why It Matters

Large language models (LLMs) are powerful, but running them locally has often felt out of reach for smaller teams, indie devs, or people in resource-constrained environments. With the right tooling (like Docker Model Runner), we can lower that barrier—unlocking more experimentation, more privacy, and more control over where and how inference happens.

I believe this aligns well with the mission of BITCON: elevating voices, demystifying advanced tech, and making it accessible. I hope this talk helps bridge a gap for folks who want to explore AI locally without getting lost in infrastructure.

I am excited to be speaking at BITCON again. To learn more about my session check it out here:

BITCon Session: The Easiest Way to Run LLMs Locally: Meet Docker Model Runner

BITCON is free! Be sure to register today: HERE

Read more

Recent Blog Posts: MCP Servers, Dev, Multi-cloud Mastery, and Cloud Engineer Resumes

This is a shorter post, but I wanted to take a moment to share what I’ve been working on lately. Over the past few months I’ve been publishing a steady stream of blog posts on Pluralsight, covering topics across cloud, AI, JavaScript, and beyond. There’s a lot happening in tech right now, and I’ve been fortunate to collaborate with the Pluralsight team to dive into some of these exciting areas:

Check out an overview of the blog posts and use the following links to read more:

Behind the Buzzword: What is MCP (MCP Server)?
A breakdown of MCP servers and why they matter in the evolving landscape of AI.
👉 Read the post

How to Run an LLM Locally on Your Desktop
Exploring why and how you might want to run a large language model on your own machine, with a closer look at Docker Model Runner.
👉 Read the post

What to Emphasize on Your Resume as a Cloud Engineer
Tips on showcasing the skills that make cloud engineers stand out in today’s job market.
👉 Read the post

Multicloud Mastery: How to Train Teams in AWS, Azure, and GCP
Practical advice on enabling engineering teams to work across multiple clouds with confidence.
👉 Read the post

6 Cloud Cost Optimization Strategies and Tools for AWS, Azure, and GCP
A set of proven strategies and tools to help control and reduce cloud spend.
👉 Read the post

How to Add User Authentication to Your JavaScript App
A straightforward guide to securing your JavaScript applications with simple authentication techniques.
👉 Read the post

I’ll be continuing to publish more content in the months ahead, so stay tuned for future posts on cloud-native engineering, AI, and practical developer skills. If you found these articles useful, I’d love for you to check them out and share them with your network.

Read more

Docker Model Runner Blog Post

I’ve been spending a lot of time blogging on Pluralsight lately, and one of my recent posts covered a topic I’m genuinely excited about: running large language models (LLMs) locally. Specifically, I explored a tool called Docker Model Runner that makes this process more accessible for developers.

In the post, I broke down a few key ideas.

Why Run an LLM Locally

There’s a lot of momentum around cloud-hosted AI services, but running models locally still has its place. For many developers it means more control, quicker experimentation, and the ability to work outside of a cloud provider’s ecosystem.

Tools in This Space

Before zeroing in on Docker Model Runner, I broke down other ways developers are running models locally. The landscape is quickly evolving, and each tool has trade-offs in terms of usability, performance, and compatibility with different models.

Why Docker Model Runner

What really stood out to me with Docker Model Runner is how it lowers the barrier to entry. Instead of wrestling with environment setup, dependencies, and GPU drivers, you can pull down a container and get straight to experimenting. It leans into Docker’s strengths of portability and consistency, so whether you’re on a desktop, laptop, or even testing in a lab environment, the experience is smooth and repeatable.

For developers who are curious about LLMs but don’t want to get bogged down in infrastructure, this tool is a great starting point.


If you want the full breakdown and step-by-step details, you can check out my Pluralsight blog here:
👉 https://www.pluralsight.com/resources/blog/ai-and-data/how-run-llm-locally-desktop

Read more

First Docker.com Blog Post – Using Gordon (AI) to Containerize Your Apps and Work with Containers

I’m excited to share that my first official blog post as a Docker Captain has been published on the Docker blog! It’s an honor to contribute to a platform that’s been so foundational in shaping how we build, ship, and run applications today. This first piece dives into Ask Gordon, Docker’s new AI assistant that helps developers go from source code to a running container with less friction and guesswork.

In the post, I walk through how Ask Gordon makes it easier to containerize your applications, even if you’ve never written a Dockerfile before. By analyzing your source code and asking a few smart questions, Ask Gordon generates everything you need to build and run your app in a containerized environment. It’s good for beginners getting started with containers and equally valuable for experienced devs looking to speed up repetitive setup tasks.

One of the things I appreciated most about Ask Gordon is how it bridges the gap between the developer’s intent and the actual container configuration. Rather than copy-pasting snippets from docs or Stack Overflow, the AI gives you context-aware Dockerfiles, Compose files, and clear next steps for your app. It’s a great example of how AI can elevate the developer experience without overcomplicating things.

This is just the beginning of my journey as a Docker Captain, and I’m looking forward to sharing more tutorials, insights, and real-world use cases that can help developers simplify their container workflows. If you haven’t checked it out yet, give my new post a read here:
👉 Containerize Your Apps with Ask Gordon

A big thanks to the Docker team for the warm welcome and opportunity!

Read more

Officially a Docker Captain!

I’m excited to share some news: I’ve officially been recognized as a Docker Captain 🐳!

You can find my Docker Captain profile on the Docker.com website here: https://www.docker.com/captains/steve-buchanan

For those unfamiliar, Docker Captains are a group of handpicked technology leaders who are passionate about Docker and the broader container ecosystem. The program highlights community members who are not only technically sharp but also deeply committed to sharing their knowledge and supporting others in the community. I am honored to join this community of 163 Captains globally, 34 of them in the US. The award, which is granted annually, is similar to the Microsoft MVP award.

Being named a Docker Captain is a huge honor. This recognition means a lot to me, especially because it’s not just about what you know, but how you give back to the community and share with others. Whether it’s speaking at conferences, creating tutorials, helping others get started, or experimenting with the latest container tools, it’s about lifting the community up together!

What This Means

As a Docker Captain, I’ll have access to:

  • Private product briefings with Docker engineers and insiders.
  • Early previews of tools, templates, and content.
  • A private Slack group with other Captains around the world.
  • The opportunity to share what I create with a wider audience through Docker’s channels.
  • A chance to meet the Docker product groups and other Captains once a year.
  • And of course… exclusive Docker swag 😎.

They already sent some cool swag in the welcome package:

But above all, it’s about continuing to give back. I’ve always believed in sharing what I know and helping others level up in tech, and this just fuels that mission even more.

What’s Next

I’ll be using my blog and other platforms to:

  • Publish more Docker and container content here.
  • Share real world use cases from the trenches.
  • Highlight new and lesser known tools in the Docker ecosystem (like Ask Gordon/Docker AI, which I recently blogged about).
  • Collaborate with the global Captain crew on exciting community initiatives.

Stay tuned for more. And if you’re just starting your Docker journey, or deep into production workloads, I’d love to hear from you. Let’s connect, collaborate, and continue building awesome things, one container at a time.

A special shout-out to Shelley Benhoff and Eva Bojorges for helping with this award and opportunity! Also thanks to Docker for the warm welcome and to everyone in the community who’s been part of this journey so far. 🚢

Read more

My 9th Book – The Modern Developer Experience Published on O’Reilly!

I’m thrilled to share that my latest book, The Modern Developer Experience (ISBN: 9781098169695), is now available on O’Reilly! 🎉 It is a shorter book, known as a report, with 4 chapters total. You can read the book on O’Reilly’s learning platform.

I am excited about this book because in today’s fast-paced tech world, developers don’t just write code, they navigate cloud platforms, cloud native tools and frameworks, integrate AI, automate workflows, and collaborate across teams to drive innovation. This book is a deep dive into the evolving role of developers and how modern tools, frameworks, and methodologies are shaping the future of software engineering.

Here is the official book description:

DevOps has delivered transformative changes to tooling and processes, but with it comes new layers of complexity. More modern frameworks and tools, like containers, Docker, Kubernetes, Platform Engineering, GitOps, and AI can accelerate development, but understanding their unique challenges (and how to address them effectively) can make the difference between a team that struggles and one that thrives.

This report explores how organizations can improve the developer experience (DevEx) by reducing complexity, streamlining workflows, and fostering supportive environments. Whether your organization is deeply invested in DevOps or simply looking to improve team performance, this report highlights strategies to elevate your development practices and outcomes.

Here are the chapters:

1. The Modern Developer Experience
2. Raising the Bar, Providing the Right Developer Environment
3. Using AI to Enhance DevEx
4. Developer Experience and the Secure Supply Chain

📖 Whether you’re a developer, team lead, or engineering manager, this book will help you refine your processes and create an environment where developers can thrive.

🔗 Check it out here: The Modern Developer Experience on O’Reilly

Read more