Ethan Mollick
@emollick.bsky.social
Professor at Wharton, studying AI and its implications for education, entrepreneurship, and work. Author of Co-Intelligence.
Book: https://a.co/d/bC2kSj1
Substack: https://www.oneusefulthing.org/
Web: https://mgmt.wharton.upenn.edu/profile/emollick
I keep warning that so many of our systems are still built around the assumption that quality writing and analysis are costly and therefore meaningful signals.
Our systems are very much not ready for the revelation that this is no longer true, as this planning objection AI shows
November 9, 2025 at 11:39 PM
This is a cool paper showing that first-gen college students don't realize a lot of unwritten rules that lead to success (the value of internships, student clubs, letters from professors).
But giving them access to an LLM for guidance significantly closes the gap. mgcuna.github.io/website/JMP_...
November 9, 2025 at 2:55 PM
Sora: "that infamous dramatic Oscar winning scene where the lead keeps getting hit by the boom mic but nobody notices"
November 5, 2025 at 4:32 AM
I have been writing for years about the fact that we are not ready for the destruction of costly signalling mechanisms. Writing used to be a way of measuring effort, ability and diligence. We still have no easy substitute
Now this paper confirms that cover letters have lost their value as a predictor
November 5, 2025 at 1:48 AM
The big article on data centers in the New Yorker is pretty good, which I wasn’t expecting given the reaction on X. Lots of talk of the good and bad of AI, and it covers both bubble & non-bubble arguments.
It also featured the best version of “I spoke to a local farmer about a data center”
November 3, 2025 at 6:23 AM
I don’t think people are tracking how quickly this is happening, for better or worse.
November 2, 2025 at 11:59 PM
Biggest gap between a brilliant passage written about a work of art and what you might expect the art to look like based on the passage?
From Walter Benjamin (the painting in the reply)
November 1, 2025 at 7:22 PM
The challenge in learning using AI is very similar to the same learning issue discovered about internet search
When we are given answers we think we learn, but we don’t. Learning is work. However, things like the “learning modes” from the AI providers help, as does using AI for tutoring, not answers
October 31, 2025 at 1:55 PM
Sora: “Tiktok style high energy video explainer about the spinning columns of penguins in the sky. The pillar has always been there.” Now do it as a conspiracy theorist. Now a conspiracy debunker. Now a travel influencer
We live in a strange time (not the penguin pillar. That has always been there)
October 31, 2025 at 1:40 AM
In discussions of AI and jobs, we put too much emphasis on the technology and not enough on the corporate leaders who are actually making decisions about what they want to do with AI & its implications.
It is a time when CEO vision matters a lot, and you can see a contrast between Amazon and Walmart
October 30, 2025 at 2:01 PM
These two paragraphs from an Anthropic study on AI introspection are worth a second to read.
I think it is fair to say that both conclusions are quite... controversial, but the paper makes an interesting attempt to back up these assertions with experiments. transformer-circuits.pub/2025/introsp...
October 29, 2025 at 6:29 PM
Among many enabling innovations for chatbots is the common cultural understanding & vast data of instant messaging
I sometimes think about the 19th century LLM, it would have been epistolary: “My dearest Claude, I write you with an unusual request to tell me the best Pokemon. Regards, AW”
October 29, 2025 at 11:58 AM
Another example of the increasingly common situation where AI helps an academic with intellectually challenging work (solving a 42-year-old open math problem). Seems like real value in combining expert human guidance with increasingly powerful LLMs. arxiv.org/abs/2510.23513
October 29, 2025 at 1:13 AM
There are ways to address this problem with prompting and tooling (& more recent models do better in these tests), but current LLMs are pretty weak at dealing with time sequences where multiple documents (like court cases) have different time stamps and need to be understood in a coherent sequence.
October 28, 2025 at 10:22 PM
New data on the corporate ROI from generative AI from a large-scale tracking survey by my colleagues at Wharton.
They found that 75% already have a positive return on investment from AI, and less than 5% a negative one. Also, 46% of business leaders use AI daily. knowledge.wharton.upenn.edu/special-repo...
October 28, 2025 at 5:10 PM
From this new post by OpenAI: 0.15% of users (something like 9M people given public numbers) show signs of suicidal intent in their ChatGPT chats each week
But there seems to be progress in making ChatGPT respond appropriately to mental health issues. openai.com/index/streng...
October 28, 2025 at 4:40 AM
The circle is complete
October 28, 2025 at 1:05 AM
I don't think teachers and trainers have updated their view of prompting enough. Bigger models are better at figuring out intent, making prompt formulas less important. Reasoners eliminate the value of chain-of-thought prompting, etc
Context & communicating goals are now key to getting good results
October 27, 2025 at 7:47 PM
I suspect that early 20th century modernists (and psychoanalysts) would have been drawn to AI base models, as, in them, we have a true view into the fragmentary associative concepts beneath all human writing... and it's weird.
Here is Llama 3.1 405B base, with the prompt "a story about modernity:"
October 26, 2025 at 4:37 PM
Sora is much closer to pulling off the prompt "someone playing a winning card in magic the gathering, close up on the board state as the card is played"
Interestingly, it makes up a card name and shows that fake card (in appropriate colors). Lots of little weirdness, but closer.
October 26, 2025 at 4:02 AM
Claude, GPT-5, Gemini, and Kimi: "write me a horror story done entirely in the dedications to six books (you can give me the title and author of each book as well)"
ChatGPT and Claude did pretty well in different ways. Kimi did the usual (sounds good but meaning falls apart).
October 26, 2025 at 2:56 AM
"It's like we summoned an eldritch abomination that exists beyond space and time, that breaks the fundamental laws of reality itself... and then we gave it a little hat."
One of the things that makes AI fun is that it generates its own easter eggs that reward weird exploration.
October 24, 2025 at 4:27 AM
It looks like AI music is following the same path as AI text:
1) Appears to have passed the Turing Test, people are only 50/50 in identifying older Suno vs. human songs (but 60/40 when two songs are the same genre)
2) Same fast development, new models are getting better quickly
(Suno is now at v5)
October 23, 2025 at 11:23 PM
The fallout from the fact that data science/classical machine learning & generative AI are both called "AI" has been remarkably broad & persistent.
Policy addresses the wrong harms, companies have been confused about who should lead efforts, hiring is misguided, academic discussion is often muddled.
October 22, 2025 at 5:18 PM
Ruining more great art with Veo 3.1 (sound on)
October 22, 2025 at 5:07 AM