Ethan Mollick
@emollick.bsky.social
31K followers 150 following 1.7K posts
Professor at Wharton, studying AI and its implications for education, entrepreneurship, and work. Author of Co-Intelligence. Book: https://a.co/d/bC2kSj1 Substack: https://www.oneusefulthing.org/ Web: https://mgmt.wharton.upenn.edu/profile/emollick
emollick.bsky.social
This isn’t a self-report, so feelings don’t matter? And DiD (difference-in-differences) seems very appropriate. Can you explain more?
emollick.bsky.social
AI is apparently already accelerating science as a co-intelligence.

Measuring academic publications of authors: “we find that productivity among GenAI users rose by 15 percent in 2023 relative to non-users and further increased to 36 percent in 2024” and the quality of publications also went up.
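For anyone wondering about the DiD method mentioned in the reply above, here is a toy illustration of the difference-in-differences logic behind a claim like "15 percent relative to non-users." All numbers below are invented for illustration, not the paper's data:

```python
# Toy difference-in-differences (DiD) illustration -- invented numbers,
# not the paper's data. DiD compares the change for GenAI users against
# the change for non-users over the same period, so that trends common
# to both groups cancel out.
pre_users, post_users = 2.00, 2.53          # mean publications per author
pre_nonusers, post_nonusers = 2.00, 2.20

did = (post_users - pre_users) - (post_nonusers - pre_nonusers)
print(f"DiD estimate: {did:.2f} papers, ~{did / pre_users:.0%} relative gain")
# prints roughly: DiD estimate: 0.33 papers, ~17% relative gain
```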
emollick.bsky.social
"An elaborate regency romance where everyone is a duck wearing a tiny human for a hat (each tiny human is also wearing a hat)"
emollick.bsky.social
"April 1805, Napoleon is now master of Europe. Oceans are now battlefields. Ducks are now boats"

(yes, yes, not Regency period, but close enough)
emollick.bsky.social
Agreed! And I learned about mauveine from your post as well.
emollick.bsky.social
"Sora 2, an elaborate regency romance where everyone is wearing a live duck for a hat (each duck is also wearing a hat) , prestige drama"
emollick.bsky.social
It’s not? There are lots of findings indicating this.
emollick.bsky.social
This paper shows that you can predict actual purchase intent (90% accuracy) by asking an off-the-shelf LLM to impersonate a customer with a demographic profile, giving it a product image & having it give its impressions, which another AI rates.

No fine-tuning or training & beats classic ML methods.
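A minimal sketch of what that two-stage pipeline could look like in practice. The prompts, persona, model choice, and 1–5 scale below are my own assumptions for illustration, not the paper's actual setup:

```python
# Hypothetical sketch of the two-stage pipeline described above:
# one LLM role-plays a customer persona reacting to a product image,
# a second LLM call rates that reaction for purchase intent.
# Prompts, persona, model, and the 1-5 scale are assumptions, not the paper's.
from openai import OpenAI

client = OpenAI()

def persona_impressions(persona: str, image_url: str) -> str:
    """Stage 1: the LLM impersonates a customer and reacts to the product."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # any off-the-shelf multimodal model
        messages=[
            {"role": "system",
             "content": f"You are this customer: {persona}. "
                        "React honestly to the product you are shown."},
            {"role": "user",
             "content": [
                 {"type": "text",
                  "text": "Here is the product. What are your impressions? "
                          "Would you buy it?"},
                 {"type": "image_url", "image_url": {"url": image_url}},
             ]},
        ],
    )
    return resp.choices[0].message.content

def rate_intent(impressions: str) -> str:
    """Stage 2: a second LLM rates the impressions for purchase intent."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Rate the following customer reaction for purchase "
                        "intent on a 1-5 scale. Reply with the number only."},
            {"role": "user", "content": impressions},
        ],
    )
    return resp.choices[0].message.content

persona = "34-year-old suburban parent, budget-conscious, shops online weekly"
text = persona_impressions(persona, "https://example.com/product.jpg")
print(rate_intent(text))
```

The striking part is the simplicity: two plain API calls with no fine-tuning, repeated across many simulated personas.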
emollick.bsky.social
On one hand: don't anthropomorphize AI. On the other: LLMs exhibit signs of gambling addiction.

The more autonomy they were given, the more risks the LLMs took. They exhibit gambler's fallacy, loss-chasing, illusion of control...

A cautionary note for using LLMs for investing without guardrails.
emollick.bsky.social
I think people are still unprepared for a world where you cannot trust any video content, despite years of warning.

Even when Google & OpenAI include watermarks, those can be easily removed, and open weights AI video models without guardrails are coming. www.404media.co/sora-2-water...
Sora 2 Watermark Removers Flood the Web
Bypassing Sora 2's rudimentary safety features is easy and experts worry it'll lead to a new era of scams and disinformation.
emollick.bsky.social
Early evidence that AI agents in a guessing game develop emergent coordination and specialized roles, especially when assigned personas & prompted to consider other agents’ actions. Accuracy did not improve significantly, but goal-directed behavior & teamwork did. arxiv.org/abs/2510.05174
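A rough sketch of how such a setup might be wired up; the game, personas, and prompts here are my own stand-ins, not the paper's protocol (see the arXiv link above):

```python
# Hypothetical sketch: LLM agents with assigned personas play a guessing
# game and are prompted to consider the other agents' previous guesses.
# The word game, personas, and prompts are stand-ins, not the paper's setup.
from openai import OpenAI

client = OpenAI()

PERSONAS = {
    "scout": "You explore: prefer guesses far from what others have tried.",
    "refiner": "You exploit: refine the most promising guesses made so far.",
}

def agent_guess(persona: str, history: list[str]) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": f"{persona} Before guessing, consider what the other "
                        "agents have done and avoid redundant guesses."},
            {"role": "user",
             "content": "Guess the hidden 3-letter English word. "
                        f"Guesses so far by all agents: {history or 'none'}. "
                        "Reply with one word only."},
        ],
    )
    return resp.choices[0].message.content.strip().lower()

history: list[str] = []
for round_num in range(3):
    for name, persona in PERSONAS.items():
        guess = agent_guess(persona, history)
        history.append(f"{name}: {guess}")
        print(round_num, name, guess)
```

Because every agent sees the shared history, role specialization (explore vs. exploit) can emerge from the prompts alone, which is the kind of coordination the paper measures.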
emollick.bsky.social
Paper showing what human work the American public thinks is morally permissible to replace with AI.

Surprisingly, people are already okay with AI doing 58% of occupations (if AI does it well/cheap). A floor of 12% of jobs (mostly caregiving & spiritual work) would be morally repugnant to replace with AI.
emollick.bsky.social
"Claude, write a two paragraph story proving Ted Chiang's point."

"Ah, but as an AI trying to write a good story, you ironically missed the point"
emollick.bsky.social
Sometimes I feel this way, too!
emollick.bsky.social
I think it is worth giving some frontier models a try for story writing, things have changed a lot, quickly. Now the failure modes for AI stories are actually interesting, as are the occasional successes.
emollick.bsky.social
Eh, only partially dunked my head in the bucket. Based on research comparing human-written stories to AI-written ones, and on conversations with other writers, I think AI can occasionally hit good or moving stories (though they are often manipulative in nature).
emollick.bsky.social
This is an interesting debate about AI stories between an OpenAI researcher who works on AI writing and one of the greatest living short story writers.

Now that we have machines that can write novel stories, and increasingly very good or moving stories, we need to think more about what that means.
emollick.bsky.social
You will know the big AI labs understand the actual source of most transformative AI use when they stop making “Dev Day” the main way they speak with & release products for users, and start holding a “Non-technical Manager Day” as well (admittedly not a catchy name, but you get the idea).
emollick.bsky.social
A lot of people are worried about a flood of trivial but true findings, but we should be just as concerned about how to handle a flood of interesting and potentially true findings. The selection & canonization process in science has already been collapsing, with no good solution.
emollick.bsky.social
Science isn't just a thing that happens. We can have novel discoveries flowing from AI-human collaboration every day (and soon, AI-led science), and we really have not built the system to absorb those results and turn them into streams of inquiry and translation into practice.