it's a little slow for daily use, so i'm saving up for an RTX 4090 or a farm of M4s
here's just a little more about current window context sharing
classic meat-brain problems
but it works!
Perplexity writes all my lua
and it knows *exactly* how to integrate neovim and avante with @lmstudio
LM Studio opens a local endpoint for requests with four "OpenAI-like" endpoints:
- /v1/models
- /v1/chat/completions
- /v1/completions
- /v1/embeddings
we're using /v1/chat/completions for our neovim + avante setup
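to see what a request to that endpoint looks like, here's a minimal stdlib-only sketch. assumptions: LM Studio's local server is running on its default port 1234, and the model name is a placeholder (LM Studio serves whichever model you have loaded):

```python
import json
import urllib.request

# assumption: LM Studio's default local server address
BASE_URL = "http://localhost:1234/v1"


def build_payload(prompt, model="local-model"):
    """Assemble an OpenAI-style chat/completions request body.

    'local-model' is a placeholder; LM Studio answers with whatever
    model is currently loaded.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }


def chat(prompt):
    """POST the payload to /v1/chat/completions and return the reply text."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]


# usage (with LM Studio's server running):
#   print(chat("explain lua tables in one sentence"))
```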
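and here's roughly what pointing avante at that endpoint looks like in lua. this is a sketch, not verbatim config: the provider/endpoint field names are assumptions based on avante's OpenAI-compatible provider, so check the avante.nvim docs for your version:

```lua
-- sketch: field names may differ between avante versions
require("avante").setup({
  provider = "openai",  -- reuse the OpenAI-compatible provider
  openai = {
    endpoint = "http://localhost:1234/v1",  -- LM Studio's default local server
    model = "your-loaded-model",            -- placeholder: whatever LM Studio has loaded
    temperature = 0.7,
    max_tokens = 4096,
  },
})
```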
running these models locally takes a TON of system resources
you may need to close a couple hundred Chrome tabs
Claude 3.5 Sonnet still trounces everything (even the open reasoning models) at RAG summarization
check out the result I get from the one-prompt YouTube description generator that Cursor created for me