In this part:
> ESPHome & custom component
> RF433 receiver & transmitter
> Hassio custom addon
Building My Smart Home - Part 1: Plan, Idea & Home Assistant
Check it out!
GGUF support for Gemma 270M right from day-0
Comes in 2 flavors: E2B and E4B (E means "effective/active parameters")
📍 Where: GLS Event Campus Berlin, Kastanienallee 82 | 10435 Berlin
👉 Register here: lu.ma/vqx423ct
📍 Where: GLS Event Campus Berlin, Kastanienallee 82 | 10435 Berlin
📅 When: June 15, 2025
👉 Register now: lu.ma/vqx423ct
Try it now: endpoints.huggingface.co/catalog
All running locally on a MacBook M3
Still can't match the power of that Casio FX 😆
Traditionally, models with complicated chat templates, like MiniCPM-V or Gemma 3, required a dedicated binary to run.
Now you can use all supported models via a single binary, "llama-mtmd-cli" 🔥
(Only Qwen2VL is not yet supported)
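A typical invocation looks like the sketch below. The model and projector paths are placeholders; you need both the language model GGUF and its multimodal projector file:

```shell
# Run a vision model through the unified multimodal CLI.
# model.gguf and mmproj.gguf are placeholder paths for the language
# model and its multimodal projector file.
llama-mtmd-cli -m model.gguf --mmproj mmproj.gguf \
    --image photo.jpg -p "Describe this image."
```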
ggml-easy is a header-only wrapper for GGML that simplifies development with a cleaner API, easy debugging utilities, and native safetensors loading ✨ Great for rapid prototyping!
And in case you didn't know, it's available in the "Starter apps" section on AI Studio. The app is called "Gemini 95"
Just use your good old human brain 🧠 😎
Check out my 3‑step estimation 🚀
Jokes aside, a 1B model quantized to Q4 without performance degradation is sweet 🤏
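For a back-of-envelope sense of why Q4 is sweet, here is an illustrative memory estimate (not the 3-step method above; the 4.5-bits-per-weight figure is an assumption, a rough average for llama.cpp's Q4 variants once block scales are included):

```python
# Back-of-envelope estimate of model weight memory after quantization.
# Assumption: Q4-family quants average roughly 4.5 bits per weight
# once block scales and metadata are counted.

def weight_memory_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GB (1 GB = 1e9 bytes)."""
    return n_params * bits_per_weight / 8 / 1e9

params = 1e9  # a 1B-parameter model
print(f"FP16: {weight_memory_gb(params, 16):.2f} GB")   # 2.00 GB
print(f"Q4:   {weight_memory_gb(params, 4.5):.2f} GB")  # 0.56 GB
```

So the quantized weights fit comfortably in well under a gigabyte, which is what makes 1B-class models practical on phones and laptops.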
Why? Because this allows me to run experiments faster, especially with models outside of llama.cpp 😆
My main talk will last an hour, diving deep into the current state of on-device LLMs and exploring their advantages, trade-offs, and limitations.
The session will end with a Q&A, where you can ask me anything about the subject.
🚀 The integration of vision models into llama.cpp
🚀 The challenges of maintaining a smooth UX/DX
🚀 The exciting future of llama.cpp
Big things ahead - stay tuned!
There is a playground for that! More in 🧵
👉 4 model sizes: 1B, 4B, 12B, 27B
👉 Vision capability (except for 1B) with bi-directional attention
👉 Context size: 32k (1B) and 128k (4B, 12B, 27B)
👉 Support for 140+ languages (except for 1B)
👉 Day-zero support on many frameworks 🚀
👉 Comes in 2 sizes, 8B and 32B
👉 Supports 32 languages
👉 Day-zero support with HF Transformers
This offers an alternative way to absorb long, intricate articles 🔍