Gordon Pennycook
@gordpennycook.bsky.social
Associate Professor, Psychology @cornelluniversity.bsky.social. Researching thinking & reasoning, misinformation, social media, AI, belief, metacognition, B.S., and various other keywords. 🇨🇦

https://gordonpennycook.com/
To find new music. I generally listen to it twice, removing songs I definitely don't like and then putting songs I like on a rolling "current tunes" playlist (which holds new music that I like until I grow weary of a song). The playlist is usually around 4 hours long.
November 26, 2025 at 1:11 AM
I listen (essentially) every week. Release Radar as well.
November 26, 2025 at 1:06 AM
The “piggy” comment is really disturbing.

Yes, it is bullying, and just mean and disrespectful.

But it’s probably also indicative of a continuing loss of inhibitory control. And that’s a sign of cognitive decline.
I don't know why the "Piggy" thing is bothering me so much. It's one more unforgivable thing in a list of 20,000 unforgivable things, but I've been mad about it for like 12 straight hours.
November 20, 2025 at 8:28 PM
He makes a similar argument in the paper
November 19, 2025 at 2:28 PM
I'm saying that I don't think we can assume it's widespread, given the technical hurdles. Also, it would be pretty shocking if the thousands of responses that people are getting from online participants every day were replaced with AI and it escaped all of our attention. It's possible, but I'm skeptical.
November 18, 2025 at 11:15 PM
Not saying it isn't possible. Just doubtful that it's already happened, and uncertain that it will happen.
November 18, 2025 at 10:18 PM
I guess I'm working under the assumption that one would still need to manage the bots to some extent. And, to ruin the entire pool, it would need to be either very many people each running a good number of bots, or a few people running a very large number of bots.
November 18, 2025 at 10:18 PM
Well, it's true that it's documented. The extent to which it's a huge problem is, in my view, debatable.
November 18, 2025 at 9:49 PM
MTurk, yes. But what we've gotten from Prolific has been decent, despite a lull that I think was caused by a viral TikTok event.
November 18, 2025 at 9:45 PM
Fair fair. I'm just saying, absent positive evidence (and given the technical expertise required), we therefore aren't in a position to assume that this is a thing that is currently happening.
November 18, 2025 at 9:43 PM
That is, assuming a particular degree of technical sophistication is required, a person with that sophistication could presumably do a lot of profitable things with their time.

In a future where it's simple to do, the problem obviously becomes much more acute.
November 18, 2025 at 9:38 PM
Sure, but in the case of (1) it's a problem that could (in theory) be dealt with via replication (i.e., if it's a small number of bad actors, it's a numbers game). For (2), it depends on the opportunity costs of the people who have such skills. Are MTurk studies sufficiently profitable?
November 18, 2025 at 9:38 PM
That's fair - but I think there's a big difference between a small number of bad actors sometimes screwing up a study (a concern, but a manageable one, since things can be replicated, etc.) and normal human participants being broadly replaced by AI bots (a catastrophic problem that destroys the pool).
November 18, 2025 at 9:35 PM
Of course, but that's still a potential *future* problem - we have no evidence to say that it is currently a problem. I don't think it's inevitable even if the tech is there, as participants also have an incentive to be genuine. If the pools get bad enough, then researchers will stop using them.
November 18, 2025 at 9:31 PM