Jeff Lockhart
@jwlockhart.bsky.social
2.3K followers 630 following 530 posts
Cat person. Sometimes sociologist of science, sex, & other stuff.
jwlockhart.bsky.social
Broke: no Internet for children.
Bespoke: the internet is only for children.
jwlockhart.bsky.social
It should be an easy win. They have all the engineering talent, all the compute, all the data they need. There's plenty of open source guidance for it on HF, reddit, GitHub, elsewhere.

BUT, when grok rolled it out, their misunderstanding of their own LLM (and the users) made it a flop.
jwlockhart.bsky.social
💯 what I don't know is the extent to which the "young Republicans" network stretching up to 40 years old and including office holders enables (and I hate falling back on this) an 'extended adolescence'
jwlockhart.bsky.social
The real hot take would also point out the extent to which this is a longstanding age / gender / race story that crosses party and political lines.
jwlockhart.bsky.social
It was only a matter of time before the big players went after the chatbot erotica market. Will be fascinating to see if they can wrestle it away from character AI, replika, grok (lol), etc.
Screenshot of Sam Altman tweet promising future rollout of erotica on chatgpt
jwlockhart.bsky.social
Sort of beside the point, but text generators are still pretty poorly behaved classifiers. The world of solutions right now is mostly mysticism around prompting and a lot of unspoken postprocessing. Would love to see people attaching different output heads to the neural nets to make them classifiers
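The contrast in the post above can be sketched in a few lines. This is a toy illustration, not any real model: the "hidden state" and "weights" are made-up numbers, and the point is only the shape of the two approaches, parsing generated text versus attaching a dedicated classification head.

```python
import math

# Option A: text generation + postprocessing. The model emits free text,
# and a fragile parser tries to map it back to a label.
def parse_generated_label(text: str) -> str:
    t = text.lower()
    if "positive" in t:
        return "positive"
    if "negative" in t:
        return "negative"
    return "unknown"  # the model rambled; the "classifier" silently fails

# Option B: a classification head. Take a pooled hidden state (here a
# made-up 4-dim vector) and apply a linear layer + softmax, so every
# input maps to a well-defined probability distribution over labels.
def classify_with_head(hidden, weights, labels):
    logits = [sum(h * w for h, w in zip(hidden, row)) for row in weights]
    exps = [math.exp(z - max(logits)) for z in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return labels[probs.index(max(probs))], probs

labels = ["positive", "negative"]
weights = [[0.9, -0.2, 0.1, 0.4],   # hypothetical learned weights
           [-0.5, 0.8, -0.3, 0.2]]
hidden = [1.0, 0.1, 0.3, 0.5]       # hypothetical pooled hidden state

print(parse_generated_label("Well, it depends on how you look at it..."))
label, probs = classify_with_head(hidden, weights, labels)
print(label)
```

Option B always returns a label and a probability for it; Option A returns "unknown" whenever the generated text wanders, which is the postprocessing problem the post is pointing at.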
jwlockhart.bsky.social
That summer I read Baudrillard and was all in on it... I still couldn't have imagined back then how hyperreal we would become.
khandozo.bsky.social
omg, I found this recipe that showed a chicken with really crispy skin but the recipe called for it to be doused in broth while it was cooking and I was like "that won't work??" and then I saw this
Image of a generated fake white woman with a winning tradwife smile wearing puff sleeves and an apron. Text below reads "Lina is a virtual chef with a mission: to make cooking accessible, enjoyable, and time-efficient. Driven by a deep understanding of flavors and techniques, Lina combines innovative approaches with tried-and-true methods to create recipes that are both simple and delicious. Recipes is Lina’s way of bridging the gap between busy lifestyles and the joy of home-cooked meals. Here, you’ll find Lina’s unique take on classic dishes, quick twists on international cuisines, and creative hacks that make cooking a breeze.

Learn more"
jwlockhart.bsky.social
"this is a public conversation" insists the guy butting in somewhere he's not wanted.
jwlockhart.bsky.social
I agreed with your point. I said there's no need for you to be a jerk to a stranger in order to make it. If you want to be rude to anyone who ever uses casual, imprecise language in a skeet, this is not a website where you'll find happiness. Muted.
jwlockhart.bsky.social
A whole cookbook of just that.
Phyllis Pellman Good's Fix-It and Forget-It Vegetarian Cookbook.
jwlockhart.bsky.social
As I said, you could have praised this in a friendly way, and I'd have agreed. Replying to a stranger pedantically, pretending not to understand me, and saying I'm wrong about my own work are bad for public conversations. It's more evidence for the critiques of "bropen science."
jwlockhart.bsky.social
What's fun about this theory is that it fits with how a bunch of crypto people pivoted to become AI people.
davekarpf.bsky.social
Everyone agrees that we're currently in a dotcom era-like AI bubble. People disagree about what sort of bubble it is.

There are 3 stories one can tell about the dotcom crash: a startup story, a telecom story, and an accounting fraud story.

My take: it's giving Enron
open.substack.com/pub/davekarp...
It's Giving Enron
On the AI bubble, and the various echoes of the dotcom crash
jwlockhart.bsky.social
Were you genuinely confused by my original post? If you understood me the first time but wanted to remind me that they're separate entities (which I know), I would not have minded you saying that directly. As is, your posts felt needlessly rude.
jwlockhart.bsky.social
My work includes the various other "*rxiv" servers out there (bio-, psych-, eng-, etc). Believe it or not, yours is not the only preprint service to use that name.
jwlockhart.bsky.social
Would be fascinating to see who comes into that kind of money and thinks "let's keep working"
jwlockhart.bsky.social
Apparently it's something like $53 worth of staff time to process one reimbursement request here. (Yes they did the math and tell us about it annually)
jwlockhart.bsky.social
How many copy editors for the price of one Bari Weiss?
jwlockhart.bsky.social
(You could paste the whole document into the context window, but even the really big ones tend to get distracted and wander off topic when you give them that much text in a single prompt)

7/fin?
jwlockhart.bsky.social
Why does knowing how RAG works matter for doing lit review or summarizing documents? Well, for starters, the LLM literally didn't "see" the whole document. It peeked at a few parts that had similar words.

6/
jwlockhart.bsky.social
The LLM sees the system prompt, your question, and some text excerpts that use words most similar to your question. From that, it starts generating a response.

(This whole process is called RAG, if you want to Google more or see how the exact details vary between chatbots)

5/
jwlockhart.bsky.social
Step 5: a program finds the text chunks with numbers most like your question.

Step 6: a program copy-pastes the text of a few of the most similar paragraphs into a prompt, maybe some metadata like the document title or page number of the quotes, and your question, then gives it to the LLM.

4/
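Steps 5 and 6 from the post above can be sketched in a few lines. This is illustrative, not any particular chatbot's code: the chunk texts, page numbers, and three-dimensional "embeddings" are all made up for the example.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical embeddings for three chunks and the user's question.
chunks = [
    {"text": "Methods: we surveyed 2,000 scientists.", "page": 4, "vec": [0.9, 0.1, 0.0]},
    {"text": "Cats were mentioned once, in a footnote.", "page": 9, "vec": [0.0, 0.2, 0.9]},
    {"text": "Survey response rates varied by field.", "page": 5, "vec": [0.8, 0.3, 0.1]},
]
question = {"text": "How was the survey conducted?", "vec": [0.85, 0.2, 0.05]}

# Step 5: rank chunks by similarity to the question's vector.
top = sorted(chunks, key=lambda c: cosine(c["vec"], question["vec"]), reverse=True)[:2]

# Step 6: copy-paste the top chunks (with metadata) plus the question
# into a prompt for the LLM.
prompt = "Answer using these excerpts:\n"
for c in top:
    prompt += f"[p.{c['page']}] {c['text']}\n"
prompt += f"Question: {question['text']}"
print(prompt)
```

Note what the LLM ends up receiving: two excerpts and the question, and nothing else from the document. The footnote chunk never makes it into the prompt at all.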
jwlockhart.bsky.social
Step 3: each chunk goes to an embedding model (think doc2vec or topic models) that gives a numeric summary of what the paragraph is kinda about.

Step 4: your question to the chatbot goes to the same model, gets its own summary numbers.

3/
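Steps 3 and 4 from the post above, as a toy: real systems use trained neural embedding models, but a bag-of-words count over a tiny made-up vocabulary shows the shape of the operation. The same model maps both chunks and the question into the same vector space.

```python
# Hypothetical fixed vocabulary; real embedding models learn their
# representations instead of counting words.
VOCAB = ["survey", "cats", "method", "respondents"]

def embed(text: str) -> list[int]:
    """Turn text into a vector: one count per vocabulary term."""
    words = text.lower().split()
    return [words.count(term) for term in VOCAB]

# Step 3: each chunk gets its summary numbers.
chunk_vec = embed("The survey method asked respondents about cats")

# Step 4: the question goes through the same model.
question_vec = embed("What survey method did you use")

print(chunk_vec)
print(question_vec)
```

The chunk and the question overlap on "survey" and "method", which is exactly the similarity that step 5 will pick up on.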
jwlockhart.bsky.social
Step 1: user uploads, say, a pdf, or points the chatbot to a folder of them.

Step 2: a program (not an LLM) extracts the text from the document and breaks it into chunks, usually around a paragraph in size.

2/
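Step 2 from the post above can be sketched as plain string processing, no LLM involved. The raw text here is a made-up stand-in for what a PDF text extractor would hand over; splitting on blank lines is the simplest paragraph-ish chunking rule.

```python
# Hypothetical extracted text from a PDF; blank lines separate paragraphs.
raw_text = (
    "Introduction. This paper studies preprints.\n\n"
    "Methods. We surveyed 2,000 scientists.\n\n"
    "Results. Most respondents like cats."
)

# Split on blank lines to get roughly paragraph-sized chunks.
chunks = [p.strip() for p in raw_text.split("\n\n") if p.strip()]

print(len(chunks))   # one chunk per paragraph
print(chunks[1])
```

Each of these chunks then goes on to step 3 (embedding) independently, which is why the LLM later sees paragraphs, not the whole document.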