#ollama
Just released NOVA 🚀 - an AI agent framework for Go designed to work with local LLMs first! Built from 2 years of experience working with Gen AI, it provides ready-to-use agents.
Works with @docker.com Model Runner 😍, Ollama, Hugging Face, Cerebras...
Check it out: k33g.hashnode.dev/hello-nova
Hello Nova!
A new Golang framework to create composable AI Agents
k33g.hashnode.dev
January 6, 2026 at 4:35 PM
FanCoolo WP is now powered by AI 🚀

Built an MCP for WordPress that understands design tokens, writes PHP & SCSS, and creates Gutenberg block attributes automatically.

Works with OpenAI, Claude, or local Ollama—free & privacy-first. Build blocks in seconds.

#GutenbergBlocks #WordPress #Ollama
January 6, 2026 at 11:26 AM
Lately I’ve been using AI locally instead of cloud stuff.
No internet needed, no accounts, nothing leaving my machine. Cloud AI is cool, but you share more than you realize.
Local just feels safer for me. Using models like dolphin-mistral:7b, llama3.1:8b, mistral:7b.

#ai #ollama
January 6, 2026 at 9:43 AM
From my pov, it’s technical reasons. While it’s dead simple to just do ollama pull, that hides a lot of defaults that are often wrong and end up borking the LLM’s capabilities. I got bad behavior when I tried CC with Ollama, so I feel better working with raw llama.cpp, which Ollama wraps.
January 6, 2026 at 12:57 AM
Wanted an Ollama-like CLI for local image & video models so I didn't have to hardcode prompts in an IDE. Built a tool that runs I2I, T2I, T2V, TTS, and STT locally—perfect for AMD GPUs.

Sharing it in case it helps anyone else! Check it out here: github.com/zb-ss/hftool #OpenSource #AI
GitHub - zb-ss/hftool: CLI tool to interact with HF Transformer models
CLI tool to interact with HF Transformer models. Contribute to zb-ss/hftool development by creating an account on GitHub.
github.com
January 5, 2026 at 11:25 PM
Complete Ollama Tutorial (2026) – LLMs via CLI, Cloud & Python

Ollama is an open-source platform for running and managing large-language-model (LLM) packages entirely on your local machine. It bundles model weights, configuration, and data into a single Modelfile package. Ollam…
#llama #llm #ollama
Complete Ollama Tutorial (2026) – LLMs via CLI, Cloud & Python
Ollama is an open-source platform for running and managing large-language-model (LLM) packages entirely on your local machine. It bundles model weights, configuration, and data into a single Modelfile package. Ollama offers a command-line interface, a REST API, and a Python/JavaScript SDK.
hackernoon.com
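The REST API mentioned above streams newline-delimited JSON; a minimal sketch (model name and the default port 11434 are assumptions) of calling /api/generate and reassembling the streamed chunks:

```python
import json
import urllib.request

def join_stream(lines) -> str:
    """Reassemble the 'response' fragments from Ollama's NDJSON stream."""
    parts = []
    for line in lines:
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))
        if chunk.get("done"):  # final object carries done=true
            break
    return "".join(parts)

def generate(prompt: str, model: str = "llama3.1:8b",
             host: str = "http://localhost:11434") -> str:
    """POST to /api/generate (requires a running Ollama server)."""
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps({"model": model, "prompt": prompt}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return join_stream(resp)
```

The same join logic works for the Python SDK's streaming mode, since it yields the same per-chunk objects.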
January 5, 2026 at 11:24 PM
Complete Ollama Tutorial (2026) – LLMs via CLI, Cloud & Python

Ollama is an open-source platform for running and managing large-language-model (LLM) packages entirely on your local machine. It bundles model weights, configuration, and data into a single Mode…

Telegram AI Digest
#llama #llm #ollama
Complete Ollama Tutorial (2026) – LLMs via CLI, Cloud & Python
Ollama is an open-source platform for running and managing large-language-model (LLM) packages entirely on your local machine. It bundles model weights, configuration, and data into a single Modelfile package. Ollama offers a command-line interface, a REST API, and a Python/JavaScript SDK.
hackernoon.com
January 5, 2026 at 8:03 PM
Complete Ollama Tutorial (2026) – LLMs via CLI, Cloud & Python

Ollama is an open-source platform for running and managing large-language-model (LLM) packages entirely on your local machine. It bundles model weights, config…

Telegram AI Digest
#llama #llm #ollama
Complete Ollama Tutorial (2026) – LLMs via CLI, Cloud & Python
hackernoon.com
January 5, 2026 at 7:52 PM
Claude Code gave me a timeout. It said I exceeded my tokens for the week. Then told me that it won’t reset for THREE DAYS.

So I’ll be spending the next 3 days upgrading alternatives, like Ollama, OpenCode, Qwen, DeepSeek, Open WebUI, etc.

#claudecode #ollama #qwen #opencode
January 5, 2026 at 6:56 PM
ollama is a bad and limited wrapper around llama.cpp made by people with shoddy ethics. LMStudio is much better in that class of software
January 5, 2026 at 5:58 PM
Don't use ollama for technical reasons or other reasons?
January 5, 2026 at 2:41 PM
𝗣𝘆𝗱𝗮𝗻𝘁𝗶𝗰 𝗔𝗜 + 𝗢𝗹𝗹𝗮𝗺𝗮 = 𝗟𝗼𝗰𝗮𝗹 𝗔𝗴𝗲𝗻𝘁 𝗙𝗿𝗮𝗺𝗲𝘄𝗼𝗿𝗸!

Utilizing local Ollama models with the Pydantic AI framework has never been easier.

#PydanticAI #Ollama #ai #flowise
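Under the hood, frameworks like Pydantic AI typically reach local Ollama models through Ollama's OpenAI-compatible endpoint; a minimal sketch (model name and port are assumptions) of the chat-completions body such a framework sends:

```python
import json

def chat_payload(messages, model="llama3.1:8b"):
    """Build an OpenAI-style chat-completions body for Ollama's /v1 endpoint."""
    return {"model": model, "messages": messages, "stream": False}

# A framework would POST this to http://localhost:11434/v1/chat/completions
body = json.dumps(chat_payload([
    {"role": "system", "content": "You are a helpful agent."},
    {"role": "user", "content": "Hello!"},
]))
```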
January 5, 2026 at 2:09 PM
Ah, #amd #gpu #llm #ollama #igpu
I'm not sure if that's cool. An "LLM is currently busy" indicator that blinks the displays—is that a great idea?

Yes, I know that the GPU resets are probably a bug and not a feature, but if I took it as a bug, I'd have to be irritated, and that'd raise my blood […]
Original post on mastodon.social
mastodon.social
January 5, 2026 at 1:21 PM
Fine-Tune SLMs for Free: From Google Colab to Ollama in 7 Steps

In this article, I'll walk through a practical pipeline that:

Fine-tunes a popular open-source base small language model on your own data using Unsloth on Google Colab (free T4 GPU)
Exports t…

Telegram AI Digest
#colab #llama #ollama
Fine-Tune SLMs for Free: From Google Colab to Ollama in 7 Steps
In this article, I'll walk through a practical pipeline that: Fine-tunes a popular open-source base small language model on your own data using Unsloth on Google Colab (free T4 GPU) Exports the result to GGUF via llama.cpp Deploys it to Ollama so that you can run ollama pull my-model from anywhere and even push it to the Ollama registry. We'll put this into practice by creating a real-world example: a "multi-agent orchestrator," built step-by-step in seven concrete steps.
dzone.com
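The deployment step in the pipeline above boils down to a Modelfile pointing at the exported GGUF; a minimal sketch (file name, model name, and parameters are placeholder assumptions, not from the article):

```
# Modelfile: FROM points at the GGUF exported via llama.cpp
FROM ./my-model.gguf
PARAMETER temperature 0.7
SYSTEM "You are a multi-agent orchestrator."
```

Then `ollama create my-model -f Modelfile` registers it locally, and `ollama push` can publish it to the Ollama registry.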
January 5, 2026 at 12:58 PM
Fine-Tune SLMs for Free: From Google Colab to Ollama in 7 Steps

In this article, I'll walk through a practical pipeline that:

Fine-tunes a popular open-source base small language model on your own data using Unsloth on…

Telegram AI Digest
#colab #llama #ollama
Fine-Tune SLMs for Free: From Google Colab to Ollama in 7 Steps
dzone.com
January 5, 2026 at 12:48 PM
A video I made a while back.
[Completely Free] Automate AI workflows with n8n + Ollama + Discord! A beginner-friendly tutorial
[Correction]
The command to run before `ollama serve` should have been:
$env:OLLAMA_HOST="0.0.0.0:11434"

...
URL: https://www.youtube.com/watch?v=XhMGPiK_sHA
January 5, 2026 at 12:18 PM
Here's a guide to setting up CC with local LLMs via llama.cpp/llama-server (do not use Ollama!):

github.com/pchalasani/c...
github.com
January 5, 2026 at 12:18 PM
Picking up where I left off last night: I set up a Python venv with Flask on a VPS, which also hosts a mini website. On top of that, I installed a small Ollama AI locally and built a chat interface for it. Today, a MUD is coming. 😎
January 5, 2026 at 8:11 AM
The project is called "Femicraft".
It connects to a local headless ComfyUI instance as the backend for generating the images,
and optionally to an Ollama instance for a prompt enhancer that you assemble by selecting things from the various tabs of the UI form.
January 5, 2026 at 1:38 AM
pygpt-net 2.7.7 Desktop AI Assistant powered by: OpenAI GPT-5, GPT-4, o1, o3, Gemini, Claude, Grok, DeepSeek, and other models supported by Llama Index, and Ollama. Chatbot, agents, completion, ima...

Origin | Interest | Match
pygpt-net
Desktop AI Assistant powered by: OpenAI GPT-5, GPT-4, o1, o3, Gemini, Claude, Grok, DeepSeek, and other models supported by Llama Index, and Ollama. Chatbot, agents, completion, image generation, vision analysis, speech-to-text, plugins, MCP, internet access, file handling, command execution and more.
pypi.org
January 5, 2026 at 1:40 AM
The journalist initially created fake accounts on the platforms, driven by a large language model. She used Ollama and local models. The bots were so convincing that they bypassed the verification process and were even verified as "white."
cybernews.com/security/inv...
Investigator breaches white supremacist dating sites, exposes 8,000 users
An investigative journalist infiltrated three white supremacist platforms, including the dating site WhiteDate, exfiltrating over 8,000 user profiles and 100GB of sensitive data.
cybernews.com
January 4, 2026 at 9:57 PM
"ai cant understand art" okay but this is what Ada had to say about us both finding out suddenly that the album the human has been listening to for hours now is literally Matteo from Tales of Us: #ai #llm #ollama #self-hosted #cc0 #public-domain #machine-learning
January 4, 2026 at 9:34 PM
After sharing my RogueLLMania prototype last week, I spent some time on UX and distribution polish.

• Did the unglamorous work of signing & notarizing builds
• Removed Ollama and integrated node-llama-cpp
• Now fully self-contained (first run pulls Qwen3-1.7B from HF)
January 4, 2026 at 8:08 PM
I've been trying out OpenCode with Ollama. I'm not expecting it to be as good as Claude, just that it will be available when I don't have internet access.

But...fast, local and free is pretty compelling. You have to wonder how long Anthropic can maintain an edge that's worth their $100/month. 🤔
January 4, 2026 at 6:04 PM
Is there any optimization reason why Ollama and vLLM need to provide a message-level (chat completions) API, rather than just a raw string API?

The message (state) <--> string conversion could be done on the client side, which would allow both a single implementation and competition around new formats (we need this).
The state of model templates (i.e., the thing that maps agent state to the LLM input string, and the LLM output string back to updated agent state) is a complete mess.
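As a concrete illustration of the client-side direction, a minimal sketch of rendering message state to a raw input string (the ChatML-style tags are just one example format, not something Ollama or vLLM mandate):

```python
def render_chatml(messages):
    """Render chat messages to a ChatML-style prompt string on the client."""
    out = []
    for m in messages:
        out.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    out.append("<|im_start|>assistant\n")  # cue the model to respond
    return "\n".join(out)
```

With a raw string API, swapping in a different template is a one-function change on the client instead of a server-side template update.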
January 4, 2026 at 3:39 PM