https://eagleusb.github.io
Platform x Senior SRE x MLOps, one system at a time
www.brailleinstitute.org/freefont/
Still so refreshing to be around during the enshittification of the Internet
```sh
❯ npm config set prefix "~/.local"
❯ npm config get prefix
/home/p00/.local
❯ npm i -g @google/gemini-cli
added 587 packages in 19s
❯ type -a gemini
gemini is /home/p00/.local/bin/gemini
```
Thank me later.
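One caveat: npm drops global binaries under `<prefix>/bin`, so `~/.local/bin` has to be on your PATH for `type -a gemini` to resolve. A minimal sketch for bash (adjust for your shell):

```sh
# npm installs global binaries to <prefix>/bin, i.e. ~/.local/bin here.
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc
exec $SHELL   # reload the shell so the new PATH takes effect
```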
All forward and reverse IPs, all CNAMEs, and all subdomains of every domain. For free.
Updated monthly.
Try: curl ip.thc.org/1.1.1.1
Raw data (187GB): ip.thc.org/docs/bulk-da...
(The fine work of messede 👌)
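A quick batch sketch, assuming the endpoint takes any address the same way as the example above:

```sh
# Look up a few well-known resolvers; -s silences curl's progress bar.
for ip in 1.1.1.1 8.8.8.8 9.9.9.9; do
  curl -s "ip.thc.org/$ip"
done
```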
semble.so/url?id=https...
#llm #ai
- GLM benchmarks are good, but not as great as Gemini Pro 2 or GPT 5.2
- BUT the cost per M tokens is a LOT less
- OpenAI API compatible (curl sketch below)
- costs less than **$30** for a year
- auto-renewal disabled day 1
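Since it speaks the OpenAI API, plain curl (or any OpenAI client) works; you only swap the base URL and key. A minimal sketch — the base URL and model name below are placeholders, take the real ones from your GLM plan:

```sh
# GLM_BASE_URL and the model name are assumptions; check your provider's docs.
export GLM_BASE_URL="https://example-glm-provider/v1"
export GLM_API_KEY="sk-..."

curl "$GLM_BASE_URL/chat/completions" \
  -H "Authorization: Bearer $GLM_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "glm-4", "messages": [{"role": "user", "content": "Hello"}]}'
```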
In 2025 it's hard to find a piece of the Internet that hasn't been cannibalized by corporations, marketing, or bots.
- 23 Mamba-2 and MoE layers, along with 6 Attention layers
- each MoE layer includes 128 experts plus 1 shared expert, with 5 experts activated per token
- 3.5B active parameters
huggingface.co/unsloth/Nemo...
#llm #ai
huggingface.co/blog/ggml-or...
#llm #llama #ai
go.eagleusb.com/semble/3m7nz...
(forked at huggingface.co/xeophon/NVID..., will eventually be shut down for sure)
Powered by:
- Mehdi Medjaoui
- JB Kempf
- Steeve Morin
Episode #1 is not bad at all, as far as French-speaking tech podcasts go
👉 https://aaif.io/
- Devstral 2 (123b)
- Devstral 2 small (24b)
mistral.ai/news/devstra...
Feature: the ngx_http_proxy_module supports HTTP/2.
nginx.org/en/CHANGES
github.com/neurosnap/zmx
#linux #terminal #zig