Joachim Vandekerckhove 🏖️
@joachim.cidlab.com
Professor of #CogSci and #Stats @UCIrvine; Pursuer of Lofty Undertakings; Purveyor of Articles Odd and Quaint; and Protector of the Realm. #blm #trahr he/him
This probably merits study. But yeah, this setup doesn't come close to meeting the requirements for the wisdom-of-the-crowd effect
November 22, 2025 at 11:00 PM
Yeah, that's... that's a bummer
November 22, 2025 at 9:54 PM
Congratulations woooo 🎉🎉🎉🎉
November 21, 2025 at 9:21 PM
Another ambiguous case! 😃
November 19, 2025 at 7:02 AM
No! Clearly the world consists of infinitely many small likelihoods and a massive prior simplex over them!
Skub
The Perry Bible Fellowship
share.google
November 19, 2025 at 6:51 AM
... and just because this is such a fun paper -- the analysis here also completely blurs the line between model selection and parameter estimation
bsky.app/profile/joac...
This fun little preprint covers an experiment in my lab that was done 10 years ago. The topic was suggested by the first author, who was then a student in my undergraduate methods class. She wanted to conduct a test of a diagnostic procedure, "manual muscle testing" (MMT), that was used by 1/
Chance level performance in expert diagnoses with applied kinesiology: https://osf.io/kvdha
November 19, 2025 at 6:49 AM
But if p(weight parameters|model) is just weight=(0,1) if model 1, (1,0) if model 2, then it just reduces to having two likelihoods. It's an ambiguous case. There are lots of examples -- only the marginal is unambiguous because that is The Model
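A minimal numerical sketch of that reduction. Everything here is invented for illustration (the data values and the choice of Normal(0,1) vs Normal(1,1) as the two likelihoods are assumptions, not anything from the thread): a mixture model whose weight vector has a two-point prior on {(1,0), (0,1)} yields exactly the same marginal likelihood as treating the two likelihoods as separate models with the same prior odds.

```python
import numpy as np
from scipy import stats

# Hypothetical data; Normal(0,1) vs Normal(1,1) stand in for the two likelihoods
y = np.array([0.3, 1.1, 0.8, -0.2, 0.6])

def lik1(y):  # likelihood under model 1
    return np.prod(stats.norm.pdf(y, loc=0.0, scale=1.0))

def lik2(y):  # likelihood under model 2
    return np.prod(stats.norm.pdf(y, loc=1.0, scale=1.0))

# (a) "Model selection": two models, prior model odds 50/50
marg_selection = 0.5 * lik1(y) + 0.5 * lik2(y)

# (b) "Parameter estimation": one mixture model whose weight vector w
#     has a two-point prior on {(1,0), (0,1)}, each with mass 0.5
def mixture_lik(y, w):
    return np.prod(w[0] * stats.norm.pdf(y, 0.0, 1.0)
                   + w[1] * stats.norm.pdf(y, 1.0, 1.0))

marg_estimation = 0.5 * mixture_lik(y, (1, 0)) + 0.5 * mixture_lik(y, (0, 1))

# Same marginal likelihood either way: only the labeling differs
print(marg_selection, marg_estimation)
```

With a point-mass prior on the simplex corners the mixture collapses component-wise, so "which component?" (estimation) and "which model?" (selection) are the same question; only the marginal over everything is unambiguous.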
November 19, 2025 at 6:44 AM
The difference is in the joint (parameters, data) distribution. What counts as prior vs not is a little vague. You could be comparing two likelihoods, or a mixture of two likelihoods with different priors on weights
November 19, 2025 at 5:09 AM
Well, priors as opposed to likelihoods is what I meant -- but it's better to think of them as inextricable
November 19, 2025 at 4:34 AM
Important to specify which posteriors you're talking about. The model selection posterior changes a lot!
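A sketch of that sensitivity, with invented numbers (the sample size, sample mean, and conjugate Normal setup are assumptions for illustration, not from the thread): widening the parameter prior under the alternative model moves the posterior model probability substantially, while the within-model posterior for the parameter barely moves.

```python
import numpy as np
from scipy import stats

# Invented summary data: n observations with sample mean ybar, known unit variance
n, ybar = 20, 0.4

results = {}
for s0 in (1.0, 100.0):  # prior sd on mu under M1: mu ~ Normal(0, s0^2)
    # Marginal density of ybar under each model (ybar | mu ~ Normal(mu, 1/n)):
    m0 = stats.norm.pdf(ybar, 0.0, np.sqrt(1 / n))          # M0: mu = 0
    m1 = stats.norm.pdf(ybar, 0.0, np.sqrt(1 / n + s0**2))  # M1, prior integrated out
    post_m0 = m0 / (m0 + m1)  # posterior P(M0 | data), prior model odds 1:1
    # Conjugate within-model posterior for mu under M1:
    post_var = 1.0 / (n + 1.0 / s0**2)
    post_mean = post_var * n * ybar
    results[s0] = (post_m0, post_mean)
    print(f"s0={s0:6.1f}  P(M0|y)={post_m0:.3f}  "
          f"mu|y,M1 ~ N({post_mean:.3f}, {post_var:.4f})")
```

Stretching the prior sd from 1 to 100 leaves the parameter posterior nearly unchanged but swings the model posterior strongly toward M0 -- the usual Bartlett-type behavior.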
November 19, 2025 at 1:13 AM
Ah, right -- yes, totally agree. Parameter priors matter whether nuisance or not, and if you change them you change the model
November 19, 2025 at 12:48 AM
I'm just pointing that out because a common misconception is that models don't change, in some fundamental way, if the difference is "merely" in the priors. But they do, and occasionally that difference is very consequential and of interest
November 19, 2025 at 12:36 AM
Right, there are four, two sets that differ only in the prior and two sets that differ only in the likelihood. But there's no rule that says one of those differences must be 0 or smaller than another difference. You just have different models
November 19, 2025 at 12:36 AM
I'm not sure how people think of this generally but in your example there are three rather different models being compared. Is that how you see it?
November 18, 2025 at 11:34 PM
It might be hard to conceive of the model you want, but that's just a fundamental hardness of inference, I'd say
November 18, 2025 at 3:32 PM