Cedric Clyburn
@cedricclyburn.com
190 followers 290 following 69 posts
Developer Advocate at Red Hat • Organizer KCD New York • Previously at MongoDB • containers, k8s, & everything in between
Pinned
cedricclyburn.com
What a time I had at @devoxx.uk this week, the energy was fantastic and it’s an honor to have attended and spoken at #DevoxxUK for the first time! Thanks to the Red Hat team with @kevindubois.com @danieloh30.bsky.social @myfear.com and many other rockstars for making this happen! 🚀 Slides below :)
cedricclyburn.com
Had one of my most fun sessions yet with @allthingsopen.bsky.social on Local LLMs, showing how developers can set up their own private RAG, Vibe Coding, and Agentic AI apps (thanks to #MCP) without needing 3rd party models! Ft @podmanio.bsky.social + #vLLM

💻 Slides: red.ht/local-llm
Reposted by Cedric Clyburn
jfokus.se
⚙️ Deep Dive Monday at #Jfokus — Building with Open Source AI: A Crash Course with @cedricclyburn.com & Legare Kerrisson from Red Hat.

Learn how open models, Linux & Kubernetes empower you to forge your own AI path. 🔥

www.jfokus.se

#DeveloperConference #AI #OpenSource #RedHat #Kubernetes #LLMs
Reposted by Cedric Clyburn
cloudnativeboy.bsky.social
Thanks to @cedricclyburn.com & Christopher Nuland, I'm really impressed with llm-d.ai: a Kubernetes-native distributed inference stack that brings cache-aware routing and disaggregated serving to LLMs.

Watch the full episode to know more: youtu.be/2Wtug1kTwUk
cedricclyburn.com
It was a pleasure to join you, mate!
cedricclyburn.com
Back to my hometown in Raleigh, NC for @allthingsopen.bsky.social this week 🤩 it’ll be a busy time with my Local LLMs talk tomorrow (Developer Room 1, 3:15pm) but we’ve already started with a #WeLoveOpenSource chat and I’ll be at the #RedHat booth tomorrow doing fun live demos 💻
cedricclyburn.com
We had a BLAST this weekend in Boston for #DevConfUS and the #vLLM meetup, talking all things open source and AI 🚀 LLM compression, distributed model serving on Kubernetes, RAG/Agentic apps, and so much more. The community is what drives things forward, and I’m so grateful!
cedricclyburn.com
OpenAI’s gpt-oss language model is a beast, and matches models like o3 and o4-mini on coding, tool use, and more 🤯 but how can you run it yourself with zero trust security, in containers + automatic GPU acceleration? The #Ramalama project has you covered: developers.redhat.com/articles/202...
How to run OpenAI's gpt-oss models locally with RamaLama | Red Hat Developer
Learn to run and serve OpenAI's gpt-oss models locally with RamaLama, a CLI tool that automates secure, containerized deployment and GPU optimization
developers.redhat.com
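For reference, once a model is served locally (RamaLama's serve command exposes an OpenAI-compatible endpoint), any standard client can talk to it. A minimal sketch in Python, assuming the server listens on localhost:8080 and the model is registered as "gpt-oss" — both are assumptions for illustration, not details from the post or article:

# Minimal sketch: query a locally served gpt-oss model through an
# OpenAI-compatible endpoint. The base_url, port, and model name are
# assumed placeholders, not values taken from the post or article.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # local inference server (assumed port)
    api_key="none",                       # local servers typically ignore the key
)

response = client.chat.completions.create(
    model="gpt-oss",  # assumed local model name
    messages=[{"role": "user", "content": "Summarize what RamaLama does in one sentence."}],
)
print(response.choices[0].message.content)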
Reposted by Cedric Clyburn
cedricclyburn.com
It’s widely known that LLMs can be costly and difficult to scale, but in the past few weeks I’ve learned about #llm-d, an open source framework for distributed AI inference on Kubernetes & put together this simple explainer blog ⤵️

What is llm-d & Why Do We Need It: www.redhat.com/en/blog/what...
What is llm-d and why do we need it?
We're seeing a significant trend: more organizations are bringing their large language model (LLM) infrastructure in-house.
www.redhat.com
cedricclyburn.com
🃏 Test open-source AI in a game of Blackjack → blackjack-frontend-blackjack-ai-demo.apps.prod.rhoai.rh-aiservices-bu.com

On desktop, hit the top-right 🔬 button to switch between a 1B parameter model on CPU and a 3B parameter model on GPU :)
Blackjack
Blackjack AI-Enhanced Game
blackjack-frontend-blackjack-ai-demo.apps.prod.rhoai.rh-aiservices-bu.com
cedricclyburn.com
We keep hearing about open-source AI like GPT-OSS, DeepSeek, Qwen, Llama 🦙 & more, but how do you actually test and scale them?

At #Ai4 in Vegas 🎰, we built a Blackjack coaching game to benchmark small language models on both CPU and GPU via #llama.cpp and #vLLM! I’ll link the slides and demo ⬇️
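As a rough illustration of that kind of CPU-vs-GPU comparison, here is a minimal sketch that times the same prompt against two OpenAI-compatible endpoints, one backed by llama.cpp on CPU and one by vLLM on GPU. The URLs and model names are assumed placeholders, not details from the demo:

# Minimal sketch: compare response latency of one prompt across two
# OpenAI-compatible endpoints. URLs and model names are assumed placeholders.
import time
from openai import OpenAI

ENDPOINTS = {
    "cpu-llama.cpp": ("http://localhost:8080/v1", "small-model-1b"),  # assumed
    "gpu-vllm": ("http://localhost:8000/v1", "small-model-3b"),       # assumed
}
PROMPT = "You have 16 and the dealer shows a 10. Hit or stand?"

for name, (base_url, model) in ENDPOINTS.items():
    client = OpenAI(base_url=base_url, api_key="none")  # local servers ignore the key
    start = time.perf_counter()
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    elapsed = time.perf_counter() - start
    print(f"{name}: {elapsed:.2f}s -> {reply.choices[0].message.content[:80]}")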
cedricclyburn.com
Can’t wait to be back in my hometown for ATO! 🤩
Reposted by Cedric Clyburn
allthingsopen.bsky.social
Cedric Clyburn (@cedricclyburn.bsky.social), Senior Developer Advocate for Red Hat, joins us on stage to share their insights. Don't miss "Everything You Need to Know About Running LLMs Locally" at #AllThingsOpen! 2025.allthingsopen.org/sessions/eve...
cedricclyburn.com
Couldn’t be happier after a busy few days in Berlin for @wearedevelopers.bsky.social World Congress, catching up with friends & hanging out at the #RedHat stage! I’ll link my own developer & AI engineer focused sessions below, featuring @podmanio.bsky.social #vLLM & Meta’s #LlamaStack!
cedricclyburn.com
Happy Monday folks! I’m inbound ✈️ to @devbcn.bsky.social where I’ll be speaking about my favorite open source projects @podmanio.bsky.social & @podman-desktop.io to show how simple it can be for developers to move from containers, to pods, to Kubernetes! Join me on July 9th @ 2:20 in Room 2 :)
cedricclyburn.com
Back to my roots! I’m here in the NY IBM Technology studio to record a new whiteboard on @podmanio.bsky.social + @podman-desktop.io and #BootableContainers! It’s always a fun time producing these but Bootc is a uniquely special subject :)
Reposted by Cedric Clyburn
dev2next.bsky.social
Why rent AI when you can run it? Learn how to deploy open-source LLMs locally, optimize performance & integrate with your data with @cedricclyburn.bsky.social at dev2next. Live demos included! 🚀

www.dev2next.com/speaker/e915...

🎟️ dev2next.com
💥 Code JOIN-CEDRICC-50OFF = discount
Reposted by Cedric Clyburn