Matt II Types
@edfinnerty.bsky.social
220 followers 65 following 140 posts
I'm a Byliner. No clearly identified human author/artist, no sale, no read, no view, no click. Pretty sure my meager follower count is inflated by bots and scammers. I do not maintain my posting history.
Pinned
I feel like this is somehow risky to say, but it shouldn't be: I cannot possibly care deeply about every issue. I cannot possibly be informed about every issue. Nobody can. I'd rather be honest than performative. I'd rather read what those who do care have to say and keep my mouth shut.
But again, read Zitron, see the depth and the sourcing, and consider that some shouting is called for, given all the mendacity and hand-waving, and the broader tech media's failure to interrogate any of the main gen AI companies' claims.
Emily Bender, Alex Hanna, Karen Hao, Gary Marcus, Molly White, Timnit Gebru, and many others.
And there are many others, all easy to find. Your claim that Zitron has no substance and implicit claim that substantial critics are hard to find are both not merely questionable, but absurdly off base. That's why you're getting the ratio that you are, not a lack of niceness.
Ed Zitron provides a ton of substance, especially regarding the business math of it all, which is a big deal. Baldur Bjarnason provides a lot as well. He wrote a whole book on problems with generative AI.
It is. The reporters get bored and lazy. It's either that, or they've lost all objectivity, for reasons ranging from the utterly banal to the nefarious.
This will somehow be far *more* embarrassing than the Ashley Madison breach.
Getting angry at people who present actual facts is the MAGA playbook. It's so sad to see people who should know better acting that way. You're doing the right thing, and I'm glad you know it.
Just got my copy of @jacobsilverman.com's new book, "Gilded Rage." I will read it immediately. First thing I did when I opened the package was take off the dust jacket, which unfortunately has a partial photo of Musk on it. That jacket is headed for the back yard fire pit this evening.
I don't even bother with CNET any more. They're a promotional publication and nothing more.
It's now been 4 1/2 months since the policy was promised, and not even a timeline has been shared with the readers and listeners whose trust CPM claims to value.
They should start pulling licenses for this shit.
If in fact this garbage ever sells in large numbers (which is highly questionable), I expect many places will be banning them at the door. One of those places will be my home.
It's irrelevant, bad faith garbage from a mental lightweight who has yet to present a single shred of evidence for any assertion. Goodbye.

Also, your last line shows you don't understand agentic AI, either. It quite frequently does *not* behave as you describe.
If you're defending an LLM by saying it's as good as Fox News (it isn't - Fox News is great at intentionally lying), that's not the clever twist you think it is.
That's not the standard, and you know it. This is a bad faith conversation. I'm done wasting my time with you. I provide articles with data. I provide a screenshot of the LLM's own output. You respond with 7th grade argumentation.
It's hand-waving bullshit from a lightweight thinker acting in bad faith. It wasn't worth responding to.
If every source you use is like that, find better sources.
There's no contradiction. That's fucking stupid. Perfect isn't the standard. Extremely bad (yet confident-sounding) in an architecturally baked-in way is. The context is that Google's 'AI' summaries are horrible at providing honest answers, and make up bullshit ones that they assert confidently.
Are you suggesting that there is no such thing as a *good* source of truth (or that you have to have the concept explained to you), so as to imply that the LLM openly stating it's a bad one means nothing?
No, I'm saying it because of the facts. I've got 31 years of professional experience in technology, including understanding the transformer-based architecture at the code level. Nobody who understands that code can assert it is good at reliably delivering factual answers. It isn't. That's a fact.
Irrelevant. The context of the thread is getting factual responses to questions. It sucks at that. It says so.
LLMs specifically (that's the context) are a shitty source of truth. Horrible.
(one of the few things it gets right consistently)