As many of you know, I was honored to be named a Docker Captain earlier this year (2025). This week, I had the incredible opportunity to attend my very first Docker Captain Summit, and what an experience it was.
The event reminded me a bit of the Microsoft MVP Summit, but with even closer access to the Docker product teams across multiple areas. Every year, the Captain Summit takes place in a different location, bringing together Docker staff from product groups, community management, marketing, and DevRel, along with fellow Docker Captains from around the world.
At the summit, we got an inside look at Docker’s roadmap and were among the first to learn about upcoming products and initiatives. We also had the opportunity to provide direct feedback to the product teams, helping shape the future of Docker from the community’s perspective.
This year’s summit was held in Istanbul, and it was a fantastic few days of connecting with so many brilliant people. I finally met in person several Docker staff members and Captains I’ve been collaborating with online. It was also a chance to reunite with friends from Microsoft and the MVP community.
Of course, not everything we discussed can be shared publicly because of NDAs, but I can tell you that we all walked away with some exciting insights and some awesome Docker swag.
🎤 I’m excited to share that I’ll be returning to BITCON in a week! I will be speaking at BITCON 2025, a gathering focused on Black voices in technology, innovation, and community. You can check out the full speaker lineup here: BITCON 2025 Speakers. The conference is virtual this year and it’s free; the site is here: https://bitcon.blacksintechnology.net
The conference has a ton of great speakers lined up from some of the largest tech companies, including Google and Microsoft. And to top it off, the keynote this year is Kelsey Hightower! You don’t want to miss this one.
My Session: “The Easiest Way to Run LLMs Locally: Meet Docker Model Runner”
Docker Captain: Steve Buchanan DMR session
At BITCON, I’ll be presenting “The Easiest Way to Run LLMs Locally: Meet Docker Model Runner”. In this session, I’ll look at:
Why run LLMs locally? The benefits in terms of cost, privacy, latency, and control
How Docker Model Runner simplifies things — containerizing large models, managing dependencies, and lowering friction
Demo and walkthrough — showing you step by step how to get a model up and running on your own machine or server
Best practices, pitfalls, and tips — what I’ve learned building and deploying these systems
Q&A / hands-on help — to get you started with your own setup
My goal is that attendees leave with a concrete, reproducible process they can apply right away.
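To give a small taste of that process, here is a minimal sketch of calling a locally running model from Python. It assumes you have already pulled and started a model with Docker Model Runner and enabled its OpenAI-compatible endpoint on the host; the port, path, and model name below are examples from my setup and may differ in yours.

```python
import requests

# Docker Model Runner exposes an OpenAI-compatible API.
# The base URL and model name here are assumptions from my setup --
# adjust them to match how you enabled host access and which model you pulled.
BASE_URL = "http://localhost:12434/engines/v1"
MODEL = "ai/smollm2"  # example model name; swap in whatever you pulled

def ask(prompt: str) -> str:
    """Send a single chat completion request to the local model."""
    resp = requests.post(
        f"{BASE_URL}/chat/completions",
        json={
            "model": MODEL,
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("In one sentence, why run an LLM locally?"))
```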
Why It Matters
Large language models (LLMs) are powerful, but running them locally has often felt out of reach for smaller teams, indie devs, or people in resource-constrained environments. With the right tooling (like Docker Model Runner), we can lower that barrier—unlocking more experimentation, more privacy, and more control over where and how inference happens.
I believe this aligns well with the mission of BITCON: elevating voices, demystifying advanced tech, and making it accessible. I hope this talk helps bridge a gap for folks who want to explore AI locally without getting lost in infrastructure.
I am excited to be speaking at BITCON again. To learn more about my session, check it out here:
I’ve been spending a lot of time blogging on Pluralsight lately, and one of my recent posts covered a topic I’m genuinely excited about: running large language models (LLMs) locally. Specifically, I explored a tool called Docker Model Runner that makes this process more accessible for developers.
In the post, I broke down a few key ideas.
Why Run an LLM Locally
There’s a lot of momentum around cloud-hosted AI services, but running models locally still has its place. For many developers it means more control, quicker experimentation, and the ability to work outside of a cloud provider’s ecosystem.
Tools in This Space
Before zeroing in on Docker Model Runner, I looked at other ways developers are running models locally. The landscape is quickly evolving, and each tool has trade-offs in terms of usability, performance, and compatibility with different models.
Why Docker Model Runner
What really stood out to me with Docker Model Runner is how it lowers the barrier to entry. Instead of wrestling with environment setup, dependencies, and GPU drivers, you can pull down a model and get straight to experimenting. It leans into Docker’s strengths of portability and consistency, so whether you’re on a desktop, laptop, or even testing in a lab environment, the experience is smooth and repeatable.
For developers who are curious about LLMs but don’t want to get bogged down in infrastructure, this tool is a great starting point.
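One thing I really like is that Model Runner exposes an OpenAI-compatible API, so existing OpenAI client code can usually be pointed at the local endpoint with minimal changes. Here is a rough sketch of what that might look like; the base URL and model name are assumptions from my environment, so adjust them for yours.

```python
from openai import OpenAI  # pip install openai

# Point a standard OpenAI client at the local Docker Model Runner endpoint.
# The base_url and model are examples -- update them to match your setup.
client = OpenAI(
    base_url="http://localhost:12434/engines/v1",
    api_key="not-needed",  # local endpoint; no real key required in my setup
)

response = client.chat.completions.create(
    model="ai/smollm2",  # example model pulled with Docker Model Runner
    messages=[
        {"role": "user", "content": "Summarize why local inference helps with privacy."}
    ],
)

print(response.choices[0].message.content)
```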
I’m excited to share that my first official blog post as a Docker Captain has been published on the Docker blog! It’s an honor to contribute to a platform that’s been so foundational in shaping how we build, ship, and run applications today. This first piece dives into Ask Gordon, Docker’s new AI assistant that helps developers go from source code to a running container with less friction and guesswork.
In the post, I walk through how Ask Gordon makes it easier to containerize your applications, even if you’ve never written a Dockerfile before. By analyzing your source code and asking a few smart questions, Ask Gordon generates everything you need to build and run your app in a containerized environment. It’s good for beginners getting started with containers and equally valuable for experienced devs looking to speed up repetitive setup tasks.
One of the things I appreciated most about Ask Gordon is how it bridges the gap between the developer’s intent and the actual container configuration. Rather than copy-pasting snippets from docs or Stack Overflow, the AI gives you context-aware Dockerfiles, Compose files, and clear next steps for your app. It’s a great example of how AI can elevate the developer experience without overcomplicating things.
This is just the beginning of my journey as a Docker Captain, and I’m looking forward to sharing more tutorials, insights, and real-world use cases that can help developers simplify their container workflows. If you haven’t checked it out yet, give my new post a read here: 👉 Containerize Your Apps with Ask Gordon
A big thanks to the Docker team for the warm welcome and opportunity!
For those unfamiliar, Docker Captains are a group of handpicked technology leaders who are passionate about Docker and the broader container ecosystem. The program highlights community members who are not only technically sharp but also deeply committed to sharing their knowledge and supporting others in the community. I am honored to join this community of 163 Captains globally and 34 in the US. The award is similar to the Microsoft MVP award and is renewed annually.
Being named a Docker Captain is a huge honor. This recognition means a lot to me, especially because it’s not just about what you know, but how you give back to the community and share with others. Whether it’s speaking at conferences, creating tutorials, helping others get started, or experimenting with the latest container tools, it’s about lifting the community up together!
What This Means
As a Docker Captain, I’ll have access to:
Private product briefings with Docker engineers and insiders.
Early previews of tools, templates, and content.
A private Slack group with other Captains around the world.
The opportunity to share what I create with a wider audience through Docker’s channels.
A chance to meet the Docker product groups and other Captains once a year.
And of course… exclusive Docker swag 😎.
They already sent some cool swag in the welcome package:
But above all, it’s about continuing to give back. I’ve always believed in sharing what I know and helping others level up in tech, and this just fuels that mission even more.
What’s Next
I’ll be using my blog and other platforms to:
Publish more Docker and container content here.
Share real-world use cases from the trenches.
Highlight new and lesser-known tools in the Docker ecosystem (like Ask Gordon/Docker AI, which I recently blogged about).
Collaborate with the global Captain crew on exciting community initiatives.
Stay tuned for more. Whether you’re just starting your Docker journey or deep into production workloads, I’d love to hear from you. Let’s connect, collaborate, and continue building awesome things, one container at a time.
A special shout-out to Shelley Benhoff and Eva Bojorges for helping with this award and opportunity! Also thanks to Docker for the warm welcome and to everyone in the community who’s been part of this journey so far. 🚢
I’m thrilled to share that my latest book, The Modern Developer Experience (ISBN: 9781098169695), is now available on O’Reilly! 🎉 It is a shorter book known as a report, with four chapters in total. You can read the book on O’Reilly’s learning platform.
I am excited about this book because in today’s fast-paced tech world, developers don’t just write code, they navigate cloud platforms, cloud native tools and frameworks, integrate AI, automate workflows, and collaborate across teams to drive innovation. This book is a deep dive into the evolving role of developers and how modern tools, frameworks, and methodologies are shaping the future of software engineering.
Here is the official book description:
DevOps has delivered transformative changes to tooling and processes, but with it comes new layers of complexity. More modern frameworks and tools, like containers, Docker, Kubernetes, Platform Engineering, GitOps, and AI can accelerate development, but understanding their unique challenges (and how to address them effectively) can make the difference between a team that struggles and one that thrives.
This report explores how organizations can improve the developer experience (DevEx) by reducing complexity, streamlining workflows, and fostering supportive environments. Whether your organization is deeply invested in DevOps or simply looking to improve team performance, this report highlights strategies to elevate your development practices and outcomes.
Here are the chapters:
1. The Modern Developer Experience
2. Raising the Bar, Providing the Right Developer Environment
4. Developer Experience and the Secure Supply Chain
📖 Whether you’re a developer, team lead, or engineering manager, this book will help you refine your processes and create an environment where developers can thrive.
I am happy to share a new episode of Azure Friday. It was an honor to appear alongside Senior Product Manager Rajat Shrivastava to talk about AKS Backup. In this episode, we joined Scott Hanselman to explore the functionality of AKS Backup in safeguarding containerized apps and their data on AKS.
Backup is frequently overlooked, only gaining significance when a failure necessitates recovery. In the realm of containers and Kubernetes, it is often perceived as unnecessary. However, the reality is that backups are essential even for containerized environments. Microsoft has introduced a backup solution for Azure Kubernetes Service (AKS) and its workloads, leveraging Azure Backup.
In this episode we dove into the importance of backing up containers, even when they are predominantly stateless. The episode sheds light on why safeguarding containers is crucial and provides insights into the workings of AKS backup in ensuring the protection of workloads running on AKS.
In the episode we also explore questions you may have about backing up K8s, and we dive into demos showing how to protect AKS with AKS Backup and how to do a restore. We even took time to answer this common question: “Do I really need to back up my K8s cluster if I am running stateless apps and have everything in code (IaC, CI/CD, or GitOps)?” The answer is yes. In fact, one should think of it this way: “GitOps & K8s Backup are like Seatbelts & Airbags”. Here is a graphic to break this down further:
Many organizations have embraced DevOps and adopted technologies like Kubernetes, cloud computing, and Infrastructure as Code (IaC) tools like Terraform or Pulumi. Despite these efforts, they often face challenges in delivering on the promises of DevOps and cloud-native. Platform engineering has emerged as the next step in the evolution, breaking down barriers and empowering developers to bring software to the market faster and more efficiently.
Recently I have been working on content to help educate and share my knowledge in this space. I am happy to announce two new pieces of Platform Engineering content: a new course and a new blog post.
Course: Platform Engineering: The Big Picture
Last week my 22nd course was published on Pluralsight! I am really excited about this course because it covers something that has been hot in tech lately: Platform Engineering. Platform Engineering has emerged as the next step in the evolution, breaking down barriers and empowering teams. As someone who works with Kubernetes and cloud native every day, this course was right up my alley.
The course is titled “Platform Engineering: The Big Picture”. It will help you explore platform engineering and discover how it can elevate cloud-native development. Platform Engineering unifies and centralizes toolchains and workflows for self-service, making developers’ lives easier while achieving new heights in software delivery.
In this course, you will gain an understanding of Platform Engineering: its benefits, architecture, tooling, and workflows, and how to adopt it.
Some of the major topics covered in the course include:
A Platform Engineering overview: why it’s needed, and how platforms enhance DevOps and streamline cloud native development.
A comparison of DevOps, SRE, and Platform Engineering.
Platform Engineering architecture, the tooling landscape, and Internal Developer Platforms.
Check out the “Platform Engineering: The Big Picture” course here:
I hope you find value in this new Platform Engineering course. Be sure to follow my profile on Pluralsight so you will be notified as I release new courses!
Here is the link to my Pluralsight profile to follow me:
Blog: 8 tools every platform engineer should know about
I am also excited to announce my second Platform Engineering-related blog post on Pluralsight, titled “8 tools every platform engineer should know about”. A platform can be built from a lot of different tools, and it can be hard to know which ones to focus on. In this post, I list eight tools that are must-knows for anyone working in the Platform Engineering space.
Hey everyone, today I’m super excited to tell you about a recent episode of Azure Friday that I was lucky enough to be a guest on.
Azure Friday is a weekly video series hosted by the legendary Scott Hanselman, where he interviews experts and developers on various Azure-related topics. In this episode, we talked about Automated Deployments for AKS, a new feature that makes it super easy to deploy your apps to Azure Kubernetes Service.
If you’re not familiar with AKS, it’s a managed Kubernetes service that lets you run containerized applications on Azure without having to worry about the complexity of managing the cluster. It’s a great way to scale your apps and take advantage of the benefits of Kubernetes, such as high availability, load balancing, and service discovery.
But what if you’re not familiar with containers or Kubernetes? What if you just have some code in a GitHub repo and you want to run it on AKS? That’s where Automated Deployments for AKS come in. It’s a feature that simplifies the Kubernetes development process by taking care of the tedious work of containerization for you. It uses a tool called Draft, which automatically detects the language and framework of your app, creates a Dockerfile and a Helm chart for you, builds and pushes the image to Azure Container Registry, and deploys the app to AKS. All with just a few clicks in the Azure Portal.
Sounds amazing, right? Well, that’s what I wanted to show Scott in this episode. I had an app hosted in a GitHub repo that I wanted to run on AKS. The app was a simple web app that displayed some data from a database. I had already created a few resources in Azure, such as a resource group, an Azure Container Registry, and an AKS cluster. All I needed to do was use Automated Deployments for AKS to get this app from code to running on a cluster.
So how did it go? Well, you’ll have to watch the episode to find out. But spoiler alert: it was super easy and fast. In just a few commands, I went from code to an app running on AKS. Scott was impressed and so was I. We had a great time chatting about how Automated Deployments for AKS works under the hood, some of the benefits and limitations of using it, and how it can help developers get started with containers and Kubernetes.
With Automated Deployments, Microsoft is opening up new avenues for developers to embrace the power of containers and AKS, enabling them to effortlessly build scalable and robust applications.
If you’re interested in learning more about Automated Deployments for AKS, you can check out the documentation here: https://learn.microsoft.com/en-us/azure/aks/automated-deployments. It’s available today in public preview, so you can try it out for yourself and see how easy it is to run your apps on AKS.
That’s all for today. I hope you enjoy this episode of Azure Friday as much as I did. It was an honor and a pleasure to be a guest on Scott’s show and talk about one of my favorite topics: Azure Kubernetes Service. If you have any questions or feedback, feel free to leave a comment or reach out to me on Twitter at @Buchatech. Thanks for reading and happy coding!
I was a guest on a very popular cloud podcast: the Cloudcast, one of the longest-running cloud podcasts around, running since 2011.
I was on episode #714 titled “Combining Kubernetes Community and Careers”. In this episode, I had a great time chatting with Aaron Delp about my journey in the Kubernetes community, building a personal brand through education and sharing, content creation, and maintaining a healthy work-life balance.
Here are the show notes breaking down the topics:
Topic 1 – Today we are going to be talking about careers and Kubernetes. Steve, welcome to the show! You have a super fascinating career journey, can you give everyone a quick introduction?
Topic 2 – I heard you over on the Kubernetes Unpacked podcast. First off, it’s hard to keep up with everything you are doing in the community these days. What is your current focus and passion? Have you reached 20 courses on Pluralsight yet?!
Topic 3 – How do you balance the day job (Program Manager for AKS) and the nights and weekends (PluralSight courses, blogging, podcasts, etc.)? Besides learning and sharing, what benefits are you seeing with this approach?
Topic 4 – I believe your journey parallels our journey here. We started the podcast to learn and give back to the community. Prior to the podcast, blogging was the big thing (we are completely aging ourselves I know) but I think it is safe to say blogging isn’t a primary source today. How would you recommend folks new to the industry get started sharing their journey? Where is the most “bang for your buck” these days?
Topic 5 – Let’s talk about Kubernetes and specifically AKS: what are customers finding new and interesting? What are the leading solutions and integrations you see combined with AKS? How do you create a “stack” in AKS (GitHub Actions, Azure Container Registry, etc.)?