folks on reddit fine-tuning deepseek OCR for persian www.reddit.com/r/LocalLLM/c..., seems not to be an out-of-the-box functional type thing though
November 7, 2025 at 3:09 AM
Everybody can reply
1 like
#ai #llm #localllama #localllm #grownostr #nostr #gfy
Not your own locally hosted LLM? You are giving away your thoughts and ideas to corporations/governments, who pay for what and how you think.
November 6, 2025 at 10:53 PM
Everybody can reply
Was expecting more of an unboxing experience from the #Nvidia RTX PRO 6000 Blackwell Max-Q, considering the price. But really, I'm just interested in the 96GB VRAM! Unfortunately, I'll need one more #AI/#ML client before I can afford to throw a second one in my new #Lenovo P8. #LocalLLM #GameDev
October 27, 2025 at 6:54 PM
Everybody can reply
1 repost
7 likes
Run Google's AI Locally: A Beginner's Guide to Running Gemini AI (Gemma) on Windows free forever!
Master zero-trust AI. Run Google’s Gemma LLM locally on Windows for ultimate data privacy and control. A complete guide to setting up your private AI
www.logeshwaran.org
October 26, 2025 at 11:49 AM
Everybody can reply
Here is the generated social media post:
Local LLM update! We've implemented OLLAMA 3.1 locally, reducing API reliance & costs. More control over data, less expenses. This means better performance and lower bills for you! #localLLM #costsavings
October 24, 2025 at 4:20 PM
Everybody can reply
Unlocking Edge AI: How A Hybrid Data Architecture Can Power Local LLM Deployments
www.linkedin.com/pulse/unlock...
#AI #EdgeAI #EnterpriseAI #LocalLLM #Ollama #OpenWebUI #LocalInference #UnifiedDataManagement #UnstructuredData #UDM
Unlocking Edge AI: How A Hybrid Data Architecture Can Power Local LLM Deployments
One of the great things about a hybrid unstructured data architecture is that data can be pinned to the edge in a particular location, even when people are working on that dataset on their edge nodes ...
www.linkedin.com
October 24, 2025 at 12:14 PM
Everybody can reply
We also developed a new R package, localLLM (cran.r-project.org/package=loca...), that enables reproducible annotation using LLM directly in R. More functionalities to follow!
localLLM: Running Local LLMs with 'llama.cpp' Backend
The 'localLLM' package provides R bindings to the 'llama.cpp' library for running large language models. The package uses a lightweight architecture where the C++ backend library is downloaded at runt...
CRAN.R-project.org
October 20, 2025 at 1:57 PM
Everybody can reply
2 likes
New on CRAN: localLLM (1.0.1). View at https://CRAN.R-project.org/package=localLLM
localLLM: Running Local LLMs with 'llama.cpp' Backend
The 'localLLM' package provides R bindings to the 'llama.cpp' library for running large language models. The package uses a lightweight architecture where the C++ backend library is downloaded at runtime rather than bundled with the package. Package features include text generation, reproducible generation, and parallel inference.
CRAN.R-project.org
October 15, 2025 at 9:21 PM
Everybody can reply
New CRAN package localLLM with initial version 1.0.1
#rstats
https://cran.r-project.org/package=localLLM
CRAN: Package localLLM
The 'localLLM' package provides R bindings to the 'llama.cpp' library for running large language models. The package uses a lightweight architecture where the C++ backend library is downloaded at runtime rather than bundled with the package. Package features include text generation, reproducible generation, and parallel inference.
cran.r-project.org
October 15, 2025 at 8:02 PM
Everybody can reply
Beyond aggregation, users want enhanced search and integration with local LLMs for personalized insights. Imagine asking your timeline questions and getting smart, private answers based on *your* data. The future is personal AI. #LocalLLM 5/6
October 8, 2025 at 4:00 PM
Everybody can reply
The Local AI Problem Nobody Is Talking About
BudgetEngineer published a post on Ko-fi
ko-fi.com
October 5, 2025 at 7:30 PM
Everybody can reply
1 like
Looking to use Open Source AI with your spreadsheets? (Un)Perplexed Spready works with Ollama models. Cut license cost, keep control.
🔗 matasoft.hr/qtrendcontro...
#OpenSourceAI #Ollama #LocalLLM #AI
Matasoft's AI-Driven Spreadsheet Processing Services and Software
Transform your business data workflows with Matasoft’s AI-driven spreadsheet processing services and software. (Un)Perplexed Spready, powered by Perplexity AI, automates data extraction, categorizatio...
matasoft.hr
October 3, 2025 at 9:46 PM
Everybody can reply
Anyways, long story short lol, local models aren't there yet, but DAMN they're progressing so damn fast. And especially if you have access to an Apple silicon machine with a good amount of RAM, I'd use that as a local LLM server. I'm trying to get a used 32GB M4 Mac Mini to achieve that.
September 28, 2025 at 4:34 PM
Everybody can reply
1 like
8/ Interested In Open Source AI?
Bookmark my account (a.k.a, "follow me") or keep an eye on my posts.
I'm writing a book series on offline AI (a.k.a, local AI; relevant: #LocalLLM) in which I introduce an open source tool that combines lessons learned with rigor into a simplified user experience.
September 24, 2025 at 4:48 AM
Everybody can reply
2 likes
itsfoss.com/android-on-d...
Running local LLMs on an Android, because: reasons.
Some of the apps discussed:
- MLC Chat
- SmolChat
- Google AI Edge Gallery
Also the specific LLMs tested:
- Google’s Gemma
- Meta’s Llama
- Microsoft’s Phi-3
- Qwen
- TinyLlama
#AI #LLM #LocalLLM #Android
I Ran Local LLMs on My Android Phone
I love the idea of having an AI assistant that works exclusively for me, no monthly fees, no data leaks. I won’t lie, it’s not perfect. Here's my experience.
itsfoss.com
September 15, 2025 at 7:11 PM
Everybody can reply
RTX 3090s are the hobbyist LocalLLM sweet spot right now. I wonder how much this is affecting prices in the used market?
September 14, 2025 at 1:13 AM
Everybody can reply
What all in one AI desktop app are you using? I want something that works with local LLMs (Llama, Qwen, Mistral, SD, Flux, etc on my networked Linux machine) and online (ChatGPT, Gemini, Claude). I want to experiment, train, and keep some chats local.
What are you using?
#localLLM #opensource #ai
September 13, 2025 at 5:59 PM
Everybody can reply
1 like
Anyway, Qwen3-4B-Instruct-2507-Q8_0 (without "reasoning") is actually okay (for its size, relatively speaking). By its own admission, it is not "a live verification system". #LLM #interview #genAI #localLLM #review #ontology
Origin: mastodon.social
September 10, 2025 at 6:01 PM
Everybody can reply
Learning the history of local LLMs and considering their possibilities https://qiita.com/yo_arai/items/f10fc552200f4abfe9d5?utm_campaign=popular_items&utm_medium=feed&utm_source=popular_items #qiita #OpenAI #生成AI #LLM #LocalLLM
Origin: rss-mstdn.studiofreesia.com
September 3, 2025 at 12:41 PM
Everybody can reply
1 like
Bring your own brain? Why local LLMs are taking off www.theregister.com/2025/08/31/l... by Danny Bradbury
#localLLM #genAI #dataretention #techsovereignty #techpolicy
September 2, 2025 at 5:44 PM
Everybody can reply
1 repost
5 likes
The best place to follow progress and contribute is here: www.reddit.com/r/LocalLLM/
www.reddit.com
August 30, 2025 at 10:42 PM
Everybody can reply
Local LLMs with openAI-compliant tool-calling course
9 chapters. 3 weeks. (8/11-8/29)
Chapter 9: Interactive UI - Visualizing Model Responses
github.com/jrbrowning/o...
#LocalLLM #OpenSource #ToolCalling #OllamaToolsAPI #OAIToolsUI #FastAPI
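For readers unfamiliar with what "openAI-compliant tool-calling" means in practice: a minimal sketch of the general shape, not taken from the course itself. The function name (`get_weather`), model name, and endpoint are hypothetical assumptions; Ollama exposes an OpenAI-compatible endpoint locally, which is what such a client would target.

```python
import json

def make_tool(name, description, properties, required):
    """Build one tool entry in the OpenAI-style tools format."""
    return {
        "type": "function",
        "function": {
            "name": name,
            "description": description,
            "parameters": {
                "type": "object",
                "properties": properties,
                "required": required,
            },
        },
    }

# Hypothetical example tool: a weather lookup the model may choose to call.
tools = [
    make_tool(
        "get_weather",
        "Return current weather for a city.",
        {"city": {"type": "string", "description": "City name"}},
        ["city"],
    )
]

# The request body an OpenAI-compatible client would POST to a local
# endpoint such as http://localhost:11434/v1/chat/completions (Ollama).
payload = {
    "model": "llama3.1",  # assumed local model name
    "messages": [{"role": "user", "content": "Weather in Oslo?"}],
    "tools": tools,
}
print(json.dumps(payload, indent=2))
```

The server replies with either plain text or a `tool_calls` entry naming the function and its JSON arguments; the client then runs the function and feeds the result back as a `tool` message.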
August 29, 2025 at 9:48 PM
Everybody can reply
2 likes
🦙📱 Build a mobile app in Blazor for your Local LLM! No cloud. No subscriptions. Just code.
🎥 Tutorial → youtu.be/5wWswrwYkUo
#Blazor #AI #LocalLLM #DotNet
Build Mobile App for your Local LLM in Blazor
YouTube video by Hassan Habib
youtu.be
August 29, 2025 at 2:46 PM
Everybody can reply