🎤 I’m excited to share that I’ll be returning to BITCON in a week! I’ll be speaking at BITCON 2025, a gathering focused on Black voices in technology, innovation, and community. You can check out the full speaker lineup here: BITCON 2025 Speakers. This year’s conference is virtual and it’s free. Check out the site here: https://bitcon.blacksintechnology.net

The conference has a ton of great speakers lined up from some of the largest tech companies, including Google and Microsoft. And to top it off, the keynote this year is Kelsey Hightower! You don’t want to miss this one.

My Session: “The Easiest Way to Run LLMs Locally: Meet Docker Model Runner”

At BITCON, I’ll be presenting “The Easiest Way to Run LLMs Locally: Meet Docker Model Runner”. In this session, I’ll cover:
- Why run LLMs locally? The benefits in terms of cost, privacy, latency, and control
- How Docker Model Runner simplifies things — containerizing large models, managing dependencies, and lowering friction
- Demo and walkthrough — showing you, step by step, how to get a model up and running on your own machine or server (see the sketch after this list)
- Best practices, pitfalls, and tips — what I’ve learned building and deploying these systems
- Q&A / hands-on help — to get you started with your own setup
My goal is that attendees leave with a concrete, reproducible process they can apply right away.
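To give a taste of what the walkthrough looks like, here’s a minimal sketch of chatting with a locally running model. It assumes Docker Desktop with Model Runner enabled and host-side TCP access turned on (commonly port 12434), and that you’ve already pulled a model; the endpoint URL and the `ai/smollm2` model name are illustrative, so substitute whatever your setup uses.

```python
# Minimal sketch: chatting with a local model served by Docker Model Runner.
# Assumptions (check your own setup): Docker Desktop with Model Runner
# enabled, host-side TCP access on (commonly port 12434), and a model
# already pulled, e.g.:
#   docker model pull ai/smollm2
from openai import OpenAI

# Model Runner exposes an OpenAI-compatible API, so the standard
# `openai` client works once it's pointed at the local endpoint.
client = OpenAI(
    base_url="http://localhost:12434/engines/v1",  # assumed default endpoint
    api_key="not-needed",  # local endpoint; no real key is required
)

response = client.chat.completions.create(
    model="ai/smollm2",  # any model you've pulled with `docker model pull`
    messages=[
        {"role": "user", "content": "In one sentence, why run LLMs locally?"}
    ],
)
print(response.choices[0].message.content)
```

Because Model Runner speaks the OpenAI API, existing client code can target it just by swapping the base URL; no infrastructure rewrite required.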
Why It Matters
Large language models (LLMs) are powerful, but running them locally has often felt out of reach for smaller teams, indie devs, or people in resource-constrained environments. With the right tooling (like Docker Model Runner), we can lower that barrier—unlocking more experimentation, more privacy, and more control over where and how inference happens.
I believe this aligns well with the mission of BITCON: elevating voices, demystifying advanced tech, and making it accessible. I hope this talk helps bridge a gap for folks who want to explore AI locally without getting lost in infrastructure.
I am excited to be speaking at BITCON again. To learn more about my session, check it out here:
BITCON Session: The Easiest Way to Run LLMs Locally: Meet Docker Model Runner
BITCON is free! Be sure to register today: HERE



