Xuan Son Nguyen
@ngxson.hf.co
770 followers 140 following 79 posts
Software Engineer @ Hugging Face 🤗
Very nice touch, Gmail 😅
Part 2 of my journey building a smart home! 🚀

In this part:
> ESPHome & custom component
> RF433 receiver & transmitter
> Hassio custom addon
Just published a new article on my blog 🏃‍♂️

Building My Smart Home - Part 1: Plan, Idea & Home Assistant

Check it out!
Kudos to Google and the llama.cpp team! 🤝

GGUF support for Gemma 270M right from day-0
Richy Mini and SmolLM3 are featured in GitHub's weekly news! 🚀 🚀
Gemma 3n has arrived in llama.cpp 👨‍🍳 🍰

Comes in 2 flavors: E2B and E4B (E means "effective/active parameters")
See you this Sunday at AI Plumbers conference: 2nd edition!

📍 Where: GLS Event Campus Berlin, Kastanienallee 82 | 10435 Berlin
👉 Register here: lu.ma/vqx423ct
✨✨ AIFoundry is bringing you the AI Plumbers Conference: 2nd edition — an open source meetup for low-level AI builders to dive deep into "the plumbing" of modern AI

📍 Where: GLS Event Campus Berlin, Kastanienallee 82 | 10435 Berlin
📅 When: June 15, 2025
👉 Register now: lu.ma/vqx423ct
Hugging Face Inference Endpoints now officially support deploying **vision** models via llama.cpp 👀 👀

Try it now: endpoints.huggingface.co/catalog
Real-time webcam demo with @huggingface.bsky.social SmolVLM and llama.cpp server.

All running locally on a MacBook M3
Although we have A100, H200, M3 Ultra, etc.,

they still can't match the power of that Casio FX 😆
llama.cpp vision support just got much better! 🚀

Traditionally, models with complicated chat templates like MiniCPM-V or Gemma 3 required a dedicated binary to run.

Now, you can use all supported models via a single "llama-mtmd-cli" binary 🔥

(Only Qwen2VL is not yet supported)
Finally have time to write a blog post about ggml-easy! 😂

ggml-easy is a header-only wrapper for GGML that simplifies development with a cleaner API, easy debugging utilities, and native safetensors loading ✨ Great for rapid prototyping!
Someone at Google definitely had a lot of fun making this 😆

And if you don't know, it's available in "Starter apps" section on AI Studio. The app is called "Gemini 95"
Estimating LLM memory requirements WITHOUT a calculator?

Just use your good old human brain 🧠 😎

Check out my 3‑step estimation 🚀
Google having a quite good sense of humor 😂

Joke aside, a 1B model quantized to Q4 without performance degradation is sweet 🤏
Cooking a fun thing today: I can now load a safetensors file directly into GGML without having to convert it to GGUF!

Why? Because this allows me to do experiments faster, especially with models outside of llama.cpp 😆
No vibe coding. Just code it ✅

Visit my website --> ngxson.com