Ethan Mollick
@emollick.bsky.social
31K followers 150 following 1.7K posts
Professor at Wharton, studying AI and its implications for education, entrepreneurship, and work. Author of Co-Intelligence. Book: https://a.co/d/bC2kSj1 Substack: https://www.oneusefulthing.org/ Web: https://mgmt.wharton.upenn.edu/profile/emollick
emollick.bsky.social
Two prompts: "Claude. make the most recursive and self-referential presentation you can imagine. seriously go big with this, don't just run with your first idea, revise it multiple times (ironic instruction, yes)"

"come on, i said recursive and self-referential. improve it. (see what i did there?)"
emollick.bsky.social
On AI & water usage, it looks like all US data center usage (not just AI) ranges from 628M gallons a day (counting evaporation from dam reservoirs used for hydro-power) to 200-275M with power but not dam evaporation, to 50M for cooling alone.

So not nothing, but also a lot less than golf courses.
emollick.bsky.social
This relatively short essay by Jack Clark (formerly of OpenAI, now Anthropic) is a good indicator of the attitude of many people inside the AI labs, and what they think is happening right now in AI.

You do not have to believe him, of course, but it is worth noting: importai.substack.com/p/import-ai-...
emollick.bsky.social
From a professor of math and engineering at Penn
emollick.bsky.social
I don't think people have updated enough on the capability gain in LLMs, which (despite being bad at math a year ago) now dominate hard STEM contests: gold medals in the International Mathematical Olympiad, the International Olympiad on Astronomy & Astrophysics, the International Olympiad in Informatics...
emollick.bsky.social
This isn’t a self-report, so feelings don’t matter? And DiD seems very appropriate. Can you explain more?
emollick.bsky.social
AI is apparently already accelerating science as a co-intelligence.

Measuring academic publications of authors: “we find that productivity among GenAI users rose by 15 percent in 2023 relative to non-users and further increased to 36 percent in 2024” and the quality of publications also went up.
emollick.bsky.social
"An elaborate regency romance where everyone is a duck wearing a tiny human for a hat (each tiny human is also wearing a hat)"
emollick.bsky.social
"April 1805, Napoleon is now master of Europe. Oceans are now battlefields. Ducks are now boats"

(yes, yes, not Regency period, but close enough)
emollick.bsky.social
Agreed! And I learned about mauveine, from your post as well.
emollick.bsky.social
"Sora 2, an elaborate regency romance where everyone is wearing a live duck for a hat (each duck is also wearing a hat), prestige drama"
emollick.bsky.social
It's not? There are lots of findings indicating this.
emollick.bsky.social
This paper shows that you can predict actual purchase intent (90% accuracy) by asking an off-the-shelf LLM to impersonate a customer with a demographic profile, giving it a product image & having it give its impressions, which another AI rates.

No fine-tuning or training & beats classic ML methods.
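The two-stage pipeline described above (one model role-plays a customer persona and reacts to a product, a second model scores that reaction) can be sketched roughly as below. This is a minimal sketch, not the paper's actual code: `call_llm` is a placeholder you would swap for a real chat-model client, and the product image is replaced by a text description so the sketch runs offline.

```python
def call_llm(system: str, user: str) -> str:
    """Placeholder LLM call. Swap in any off-the-shelf chat-completion
    client; returns canned text here so the sketch runs without a network."""
    if "impersonate" in system:
        return "Looks sturdy and well priced; I would probably buy it."
    return "8"  # the judge model's purchase-intent score as text

def customer_impression(persona: str, product: str) -> str:
    # Stage 1: the model impersonates a customer with a demographic
    # profile and gives free-text impressions of the product.
    system = f"You are to impersonate this customer: {persona}"
    user = f"Here is the product: {product}. What are your impressions?"
    return call_llm(system, user)

def rate_purchase_intent(impression: str) -> int:
    # Stage 2: a second model reads the impression and rates purchase
    # intent on a 0-10 scale; only the integer is kept.
    system = "You are a rater. Output a single integer 0-10 for purchase intent."
    return int(call_llm(system, f"Impression: {impression}"))

# Hypothetical profile and product, for illustration only.
persona = "35-year-old urban parent, budget-conscious"
impression = customer_impression(persona, "a folding steel stroller")
score = rate_purchase_intent(impression)
print(score)
```

The design point is that no fine-tuning happens anywhere: both stages use stock models, and the only task-specific parts are the persona prompt and the rating rubric.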
emollick.bsky.social
On one hand: don't anthropomorphize AI. On the other: LLMs exhibit signs of gambling addiction.

The more autonomy they were given, the more risks the LLMs took. They exhibit gambler's fallacy, loss-chasing, illusion of control...

A cautionary note for using LLMs for investing without guardrails.
emollick.bsky.social
I think people are still unprepared for a world where you cannot trust any video content, despite years of warning.

Even when Google & OpenAI include watermarks, those can be easily removed, and open weights AI video models without guardrails are coming. www.404media.co/sora-2-water...
Sora 2 Watermark Removers Flood the Web
Bypassing Sora 2's rudimentary safety features is easy and experts worry it'll lead to a new era of scams and disinformation.
emollick.bsky.social
Early evidence that AI agents in a guessing game develop emergent coordination and specialized roles, especially when assigned personas & prompted to consider other agents’ actions. There was no significant increase in accuracy but higher goal-directed behavior & teamwork. arxiv.org/abs/2510.05174
emollick.bsky.social
Paper showing what human work the American public thinks is morally permissible to replace with AI.

Surprisingly, people are already okay with AI doing 58% of occupations (if AI does the job well and cheaply). A floor of 12% of jobs (mostly caregiving & spiritual work) would be morally repugnant to replace with AI.
emollick.bsky.social
"Claude, write a two paragraph story proving Ted Chiang's point."

"Ah, but as an AI trying to write a good story, you ironically missed the point"