(this might be useful when installing many codebases locally, each with different torch+CUDA requirements, without some form of isolation):
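A minimal sketch of the isolation idea, assuming made-up project names and torch versions: give each codebase its own virtual environment so their pinned torch+CUDA builds never conflict.

```python
# Minimal sketch (project names and torch versions are made up):
# one virtual environment per codebase, so each can pin its own
# torch + CUDA build without clobbering the others.
import os
import venv

for project in ("project-a", "project-b"):
    # with_pip=False keeps creation fast; pass with_pip=True if each
    # env should bootstrap pip and install its own pinned wheel, e.g.
    #   envs/project-a/bin/pip install torch==2.1.0 \
    #       --index-url https://download.pytorch.org/whl/cu118
    venv.create(os.path.join("envs", project), with_pip=False)

print(sorted(os.listdir("envs")))
```

Tools like uv or conda solve the same problem; the point is one interpreter plus site-packages per codebase, not one shared global install.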
Why I'm using all of GPT 5.2 Thinking/Pro, Claude Opus 4.5, Gemini 3, and occasionally even Grok. How my usage has changed over time. The meta for getting the most out of AI in 2026.
www.interconnects.ai/p/use-multip...
claim: 200,000 garments from ten different clients in the past 3 months
www.dyna.co/blog/monster...
This is a custom transport-triggered 32-bit processor that will be fabricated by GlobalFoundries on their 180nm process (via wafer.space)
I'll make a video at some point, but some high level details here:
www.youtube.com/watch?v=aR20...
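For readers unfamiliar with the term: in a transport-triggered architecture the only instruction is a move between function-unit ports, and writing a unit's "trigger" port is what executes its operation. A toy illustration of that idea (this is a generic sketch, not this chip's actual design):

```python
# Toy transport-triggered function unit (generic illustration, not this
# chip's design): the program only moves data between ports; writing the
# trigger port is what makes the unit compute.
class AddUnit:
    def __init__(self):
        self.operand = 0   # plain operand port: just latches the value
        self.result = 0    # result port: read back after triggering

    def trigger(self, value):
        # writing the trigger port performs the add as a side effect
        self.result = self.operand + value

add = AddUnit()
add.operand = 40   # move 40 -> add.operand
add.trigger(2)     # move 2  -> add.trigger (this executes the add)
print(add.result)  # move add.result onward; prints 42
```

The appeal for a hobby ASIC is that the datapath stays simple: the compiler schedules moves explicitly instead of the hardware decoding rich opcodes.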
marble.worldlabs.ai