chris
@chrisecramer.bsky.social
Sr Director of AI and ML / 30+ years with ML and computer vision. Beekeeper, home brewer, and general fermentation fan. LLM skeptic
I wouldn't trust the (any?) number from Sam Altman, and it almost certainly isn't an apples-to-apples comparison with the Netflix analysis
November 29, 2025 at 1:56 PM
I'll note that 0.34 Wh works out to a 250 W GPU running for about 5 seconds (almost exactly). That sounds low for ChatGPT, and it also ignores the external costs of transmission, the receiving computer, etc. that are rolled up into the Netflix analysis
November 29, 2025 at 1:54 PM
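The arithmetic in the post above is easy to check; a quick sketch (the 0.34 Wh figure is the claimed per-query energy, and the 250 W GPU draw is the post's own assumption):

```python
# How long does a 250 W GPU take to consume 0.34 Wh?
energy_j = 0.34 * 3600   # 1 Wh = 3600 J, so 0.34 Wh = 1224 J
gpu_power_w = 250        # assumed GPU power draw from the post
seconds = energy_j / gpu_power_w
print(round(seconds, 2))  # -> 4.9, i.e. "almost exactly" 5 seconds
```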
Maybe I've lost the point, but this is embarrassingly wrong. It's spitting out a 10x10 grid with 9 numbers between 1 and 10 (no 9 on the top row and no 10 in the first column), and the math is obviously wrong. What is this supposed to prove?
November 19, 2025 at 11:30 AM
I'm pretty sure you can drag the new window back into the old window, and the new window goes away and becomes a tab again. It's definitely undoable. That said, having a way to turn off the feature for those who don't want it makes sense :-)
November 14, 2025 at 5:14 PM
So I guess from a technical standpoint, I would be cautious. And that doesn't even get into the jobs aspect
October 7, 2025 at 5:52 PM
I've heard of other similar issues in translation software using genAI. IIRC, there was a recent story about KPop Demon Hunters using ChatGPT to help with songs, but it turned out to be an artifact of the translation software
October 7, 2025 at 5:52 PM
For example, the medical transcription tool Whisper (from OpenAI) has been shown to invent entire conversations, in part because of weird context attention, and in part because once it gets one thing wrong, the whole transcription can go off the rails.
October 7, 2025 at 5:52 PM
The other form of AI translation is based on generative AI, think ChatGPT. These are predictive models that try to predict the next word based on what's come before and the current input. The problem with them is that they are less grounded in the original.
October 7, 2025 at 5:52 PM
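A toy illustration of "predict the next word based on what's come before": a bigram model, which is a drastic simplification of what generative models do, but shows the shape of the idea (the corpus here is made up):

```python
from collections import Counter, defaultdict

# Toy bigram "next word" predictor -- a drastic simplification of
# the next-token prediction that generative models perform.
corpus = "the cat sat on the mat the cat ate".split()

# Count which word follows which in the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    # Return the most frequent follower of `word` in the corpus.
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # -> "cat" ("the cat" appears twice, "the mat" once)
```

Note the prediction depends only on co-occurrence statistics, not on any grounding in the meaning of a source text, which is where the fidelity problem comes from.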
Google Translate is based on a sequence-to-sequence model that tries to map language A to language B. It is fairly accurate and grounded in the meaning of the original text. There are also seq2seq models for spoken-to-written text.
October 7, 2025 at 5:52 PM
It sounds like the goal is to have spoken language translated into another spoken language? Or into another written language?

From a technical standpoint, there are two approaches that are broadly labeled as AI.
October 7, 2025 at 5:52 PM
Agreed. And several of the examples provided (boilerplate code, website templates, etc.) are probably better accomplished with an actual code template.
September 29, 2025 at 8:19 PM
Reposted by chris
It's like glitter...it's been hundreds of years and we're STILL finding Vikings everywhere 😆
September 23, 2025 at 7:53 AM
And this is why you're the brains of the family - or at least the memory 😁
September 15, 2025 at 11:08 PM
You should! They're great
September 15, 2025 at 1:22 AM
Near there - this was just up the trail from Rainbow Falls in Gorges State Park. It was July 4th week and we were avoiding DuPont 😁
September 15, 2025 at 1:18 AM
As I understand it, yes. It could let the cultivar root and negate the benefits of the rootstock. But it's very possible there's something I don't know :-)
September 13, 2025 at 12:55 AM
I mean, to be fair, I would love to spend a few hours with Hedy Lamarr, discussing spread spectrum communication systems. I'm not sure that constitutes a date, but...
September 11, 2025 at 11:35 PM
Nice explanation!
September 2, 2025 at 12:41 AM
Yeah - a trillion different possibilities is amazing... and now I want miso soup 😀
September 2, 2025 at 12:02 AM
Which is a lot of misos
September 1, 2025 at 11:58 PM
So, my result:

G1: 10
G2: 171
G3: 13
G4: 102091
G5: 56
G6: 8

Total: 1,016,728,352,640
September 1, 2025 at 11:57 PM