My First Docker Captain Summit Experience

As many of you know, I was honored to be named a Docker Captain earlier this year (2025). This week, I had the incredible opportunity to attend my very first Docker Captain Summit, and what an experience it was.

The event reminded me a bit of the Microsoft MVP Summit, but with even closer access to the Docker product teams across multiple areas. Every year, the Captain Summit takes place in a different location, bringing together Docker staff from product groups, community management, marketing, and DevRel, along with fellow Docker Captains from around the world.

At the summit, we got an inside look at Docker’s roadmap and were among the first to learn about upcoming products and initiatives. We also had the opportunity to provide direct feedback to the product teams, helping shape the future of Docker from the community’s perspective.

This year’s summit was held in Istanbul, and it was a fantastic few days of connecting with so many brilliant people. I finally met in person several Docker staff members and Captains I’ve been collaborating with online. It was also a chance to reunite with friends from Microsoft and the MVP community.

Of course, not everything we discussed can be shared publicly because of NDAs, but I can tell you that we all walked away with some exciting insights and some awesome Docker swag.


I'm Speaking at BITCON 2025 – Easiest Way to Run LLMs Locally: Meet Docker Model Runner

🎤 I’m excited to share that I’ll be returning to BITCON in a week! I’ll be speaking at BITCON 2025, a gathering focused on Black voices in technology, innovation, and community. You can check out the full speaker lineup here: BITCON 2025 Speakers. The conference this year is virtual and it’s free. You can check out the site here: https://bitcon.blacksintechnology.net

The conference has a ton of great speakers lined up from some of the largest tech companies, such as Google, Microsoft, and more. And to top it off, the keynote speaker this year is Kelsey Hightower! You don’t want to miss this one.

My Session: “The Easiest Way to Run LLMs Locally: Meet Docker Model Runner”

At BITCON, I’ll be presenting “The Easiest Way to Run LLMs Locally: Meet Docker Model Runner”. In this session, I’ll look at:

  • Why run LLMs locally? The benefits in terms of cost, privacy, latency, and control
  • How Docker Model Runner simplifies things — containerizing large models, managing dependencies, and lowering friction
  • Demo and walkthrough — showing you step by step how to get a model up and running on your own machine or server
  • Best practices, pitfalls, and tips — what I’ve learned building and deploying these systems
  • Q&A / hands-on help — to get you started with your own setup

My goal is that attendees leave with a concrete, reproducible process they can apply right away.
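To give a taste of how simple that process is, here’s a minimal sketch of the kind of workflow the demo walks through, assuming a recent Docker Desktop with Model Runner enabled (the `ai/smollm2` model name is just an example from Docker Hub’s `ai/` catalog):

```shell
# Pull a model image from Docker Hub's ai/ namespace
docker model pull ai/smollm2

# See which models are available locally
docker model list

# Run a one-shot prompt against the local model
docker model run ai/smollm2 "Explain containers in one sentence."
```

No Python environments, no GPU driver wrangling; the model is pulled and run much like any other Docker artifact.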

Why It Matters

Large language models (LLMs) are powerful, but running them locally has often felt out of reach for smaller teams, indie devs, or people in resource-constrained environments. With the right tooling (like Docker Model Runner), we can lower that barrier—unlocking more experimentation, more privacy, and more control over where and how inference happens.

I believe this aligns well with the mission of BITCON: elevating voices, demystifying advanced tech, and making it accessible. I hope this talk helps bridge a gap for folks who want to explore AI locally without getting lost in infrastructure.

I am excited to be speaking at BITCON again. To learn more about my session, check it out here:

BITCon Session: The Easiest Way to Run LLMs Locally: Meet Docker Model Runner

BITCON is free! Be sure to register today at https://bitcon.blacksintechnology.net


Docker Model Runner Blog Post

I’ve been spending a lot of time blogging on Pluralsight lately, and one of my recent posts covered a topic I’m genuinely excited about: running large language models (LLMs) locally. Specifically, I explored a tool called Docker Model Runner that makes this process more accessible for developers.

In the post, I broke down a few key ideas.

Why Run an LLM Locally

There’s a lot of momentum around cloud-hosted AI services, but running models locally still has its place. For many developers it means more control, quicker experimentation, and the ability to work outside of a cloud provider’s ecosystem.

Tools in This Space

Before zeroing in on Docker Model Runner, I broke down other ways developers are running models locally. The landscape is quickly evolving, and each tool has trade-offs in terms of usability, performance, and compatibility with different models.

Why Docker Model Runner

What really stood out to me with Docker Model Runner is how it lowers the barrier to entry. Instead of wrestling with environment setup, dependencies, and GPU drivers, you can pull down a container and get straight to experimenting. It leans into Docker’s strengths of portability and consistency, so whether you’re on a desktop, laptop, or even testing in a lab environment, the experience is smooth and repeatable.

For developers who are curious about LLMs but don’t want to get bogged down in infrastructure, this tool is a great starting point.
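Part of what makes it a good starting point is that Model Runner also exposes an OpenAI-compatible API, so existing clients and scripts can be pointed at a local model. A hedged sketch of what that looks like from the host (the port `12434` and the `/engines/v1` path reflect my reading of Docker’s docs, host TCP access must be enabled in Docker Desktop, and `ai/smollm2` is an example model you would pull first):

```shell
# Assumes: docker model pull ai/smollm2 has already been run
curl http://localhost:12434/engines/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "ai/smollm2",
        "messages": [
          {"role": "user", "content": "Say hello in five words."}
        ]
      }'
```

Because the API shape matches the OpenAI chat completions format, swapping a cloud endpoint for a local one is often just a base-URL change.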


If you want the full breakdown and step-by-step details, you can check out my Pluralsight blog here:
👉 https://www.pluralsight.com/resources/blog/ai-and-data/how-run-llm-locally-desktop


Officially a Docker Captain!

I’m excited to share some big news: I’ve officially been recognized as a Docker Captain 🐳!

You can find my Docker Captain profile on the Docker.com website here: https://www.docker.com/captains/steve-buchanan

For those unfamiliar, Docker Captains are a group of handpicked technology leaders who are passionate about Docker and the broader container ecosystem. The program highlights community members who are not only technically sharp but also deeply committed to sharing their knowledge and supporting others in the community. I am honored to join this community of 163 Captains globally, 34 of them in the US. The award is similar to the Microsoft MVP award and is granted annually.

Being named a Docker Captain is a huge honor. This recognition means a lot to me, especially because it’s not just about what you know, but how you give back to the community and share with others. Whether it’s speaking at conferences, creating tutorials, helping others get started, or experimenting with the latest container tools, it’s about lifting the community up together!

What This Means

As a Docker Captain, I’ll have access to:

  • Private product briefings with Docker engineers and insiders.
  • Early previews of tools, templates, and content.
  • A private Slack group with other Captains around the world.
  • The opportunity to share what I create with a wider audience through Docker’s channels.
  • A chance to meet the Docker product groups and other Captains once a year.
  • And of course… exclusive Docker swag 😎.

Docker already sent some cool swag in the welcome package.

But above all, it’s about continuing to give back. I’ve always believed in sharing what I know and helping others level up in tech, and this just fuels that mission even more.

What’s Next

I’ll be using my blog and other platforms to:

  • Publish more Docker and container content here.
  • Share real-world use cases from the trenches.
  • Highlight new and lesser-known tools in the Docker ecosystem (like Ask Gordon/Docker AI, which I recently blogged about).
  • Collaborate with the global Captain crew on exciting community initiatives.

Stay tuned for more. Whether you’re just starting your Docker journey or deep into production workloads, I’d love to hear from you. Let’s connect, collaborate, and continue building awesome things, one container at a time.

A special shout-out to Shelley Benhoff and Eva Bojorges for helping with this award and opportunity! Also, thanks to Docker for the warm welcome and to everyone in the community who’s been part of this journey so far. 🚢
