Malte Elson
@malte.the100.ci
Professor at Uni Bern

Meta Science ⸾ Research Methods ⸾ IT Security & Privacy ⸾ Technology Effects

https://the100.ci & @error.reviews
So how does that work with regular meta-analyses, where follow-up queries to original authors are common? I wouldn't be allowed to use any information obtained via email then?
February 13, 2026 at 11:41 PM
Shocking, to be honest! We acknowledge of course that researchers may be forced to comply with such policies, and hope that perhaps they may evolve over time.
February 13, 2026 at 11:39 PM
No need to publish the contents of the correspondence or quotes from the emails
February 13, 2026 at 9:25 PM
But you don't collect data on them! Or rather, the data you would publish is a Yes/No to the question "Are the data in fact available on request as promised?", which is not data about the authors.
February 13, 2026 at 9:21 PM
Reposted by Malte Elson
This is a no-brainer. Metascience is not accountable if it is not transparent about the research that it uses or critiques.
New paper, on a worrying trend in meta-science: the practice of anonymising datasets on, e.g., published articles. We argue that this is at odds with norms established in research synthesis, explore arguments for anonymisation, provide counterpoints, and demonstrate implications and epistemic costs.
Against Anonymising Meta-Scientific Data: https://osf.io/6eyjf
February 13, 2026 at 6:49 PM
Reposted by Malte Elson
Today in that-didn't-happen: Cohen's d = 22.

Williams et al. (2014) has 145 citations, putting it in the top 1% of most-cited psych articles.

It is a load-bearing publication in its area, despite having impossible results.

pubpeer.com/publications...
February 13, 2026 at 4:51 PM
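For scale: Cohen's d expresses a mean difference in pooled standard-deviation units, and d ≈ 0.8 is conventionally labelled "large". A minimal Python sketch (with invented numbers, purely to illustrate the arithmetic) of what d = 22 would imply:

```python
from statistics import NormalDist

# Cohen's d: difference between group means in pooled-SD units.
def cohens_d(mean1: float, mean2: float, pooled_sd: float) -> float:
    return (mean1 - mean2) / pooled_sd

# Hypothetical groups 22 pooled SDs apart (illustrative numbers only).
d = cohens_d(mean1=122.0, mean2=100.0, pooled_sd=1.0)  # -> 22.0

# Probability that a random person from the lower group outscores a
# random person from the higher group, assuming normality: Phi(-d / sqrt(2)).
overlap = NormalDist().cdf(-d / 2 ** 0.5)
print(f"d = {d}, P(lower group outscores higher) = {overlap:.1e}")
```

With d = 22, that probability is on the order of 10^-55: the two distributions effectively do not overlap at all, which is not a plausible outcome for any real psychological measurement.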
The alternative future is that meta-science builds community norms that embrace critical assessment. Reforms motivated by meta-science become easier to evaluate and easier to justify when the evidence is robust – but also easier to discard when reforms are counterproductive for the greater good.
February 13, 2026 at 4:50 PM
In closing, we think there are two possible futures for meta-science. In one, meta-science risks eroding the very basis on which it claims authority by being unverifiable: reforms will either proceed without meta-science’s guidance, or meta-science will be treated as just another opinionated subfield.
February 13, 2026 at 4:50 PM
From this follow simple recommendations: as a default, meta-scientific studies of published research artefacts need to include 1) a full, identifiable list of included studies, 2) the full coding instrument and decision rules, and 3) the individual ratings together with a codebook.
February 13, 2026 at 4:50 PM
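As an editorial aside: a dataset meeting these three defaults can be as simple as one identifiable row per study and rating, plus a codebook. A hedged Python sketch (all column names, identifiers, and values are our invention, not a template from the paper):

```python
import csv, io

# Hypothetical illustration of (1) identifiable studies and (3) individual
# ratings with a codebook; every field shown is made up for this example.
ratings_csv = """\
study_id,doi,rater,prereg_quality
Smith2019,10.1234/hypothetical.1,R1,2
Smith2019,10.1234/hypothetical.1,R2,3
Lee2021,10.5678/hypothetical.2,R1,1
"""

codebook = {"prereg_quality": "1 = vague, 2 = partial, 3 = fully specified"}

for row in csv.DictReader(io.StringIO(ratings_csv)):
    print(row["study_id"], row["rater"], row["prereg_quality"])
```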
Personal priorities come at epistemic costs; there is confusion about what (rather than whom) is being assessed; anonymisation is incoherent with other forms of critique and sets a dangerous precedent; and there is undeniable hypocrisy in asking for greater transparency but not returning it in kind.
February 13, 2026 at 4:50 PM
We acknowledge there are reasons for anonymising, such as fear of reputational harm (for the meta-researcher and others), fairness, peer review pressure, or policy compliance. We argue that the damaging long-term consequences of this practice outweigh these concerns, and offer counterarguments:
February 13, 2026 at 4:50 PM
If anonymisation becomes widespread practice, there are substantial consequences to consider: no verification, no robustness checks, no way to expand the research, missed opportunities for learning and training, and a culture where errors are taboo.
February 13, 2026 at 4:50 PM
Meta-scientific studies, too, often involve quality ratings (e.g., reporting, compliance, trustworthiness) of public research artefacts (e.g., articles, preregs). Are these evaluations more sensitive? We don't think so, and yet we have increasingly observed that such data are anonymised before sharing.
February 13, 2026 at 4:50 PM
But even in Psychological Bulletin, quality assessments were done in 59/100 meta-analyses, and of those, 50 had made their data available (47 included study identifiers). As such, /where/ quality ratings are shared, they typically include identifiers.
February 13, 2026 at 4:50 PM
1) All major guidelines on research synthesis (Cochrane, MARS) recommend that included studies are rated on quality, and that these ratings be transparent. We examine compliance in 100 reviews each from Cochrane and Psychological Bulletin. Unsurprisingly, 100% of Cochrane reviews comply.
February 13, 2026 at 4:50 PM
New paper, on a worrying trend in meta-science: the practice of anonymising datasets on, e.g., published articles. We argue that this is at odds with norms established in research synthesis, explore arguments for anonymisation, provide counterpoints, and demonstrate implications and epistemic costs.
Against Anonymising Meta-Scientific Data: https://osf.io/6eyjf
February 13, 2026 at 4:50 PM
Reposted by Malte Elson
New blog post about the age-period-cohort identification problem!

In which, for the first time ever, I ask "What's the mechanism?" and also suggest that sometimes you may actually *not* be interested in causal inference.

www.the100.ci/2026/02/13/o...
One approach to the age-period-cohort problem: Just don’t.
February 13, 2026 at 2:33 PM
Reposted by Malte Elson
giving stats advice: *ina garten voice* if you can't {run complex statistical analysis} t-test is fine
February 12, 2026 at 10:42 PM
Reposted by Malte Elson
this is the adult version of writing to santa for gifts
Unfortunately, I am addicted to grant writing.
February 12, 2026 at 4:54 PM
Reposted by Malte Elson
Once you start thinking about implausibly large effect sizes, you can't stop spotting them around you or wondering how others aren't doing so.

Harkin et al.'s (2016) meta-analysis has 740+ citations, but it reports Cohen's d values as large as 14, with 17 cases of d > 4.

psycnet.apa.org/record/2015-...
February 12, 2026 at 3:06 PM
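The screening this post describes can be partly mechanised: once effect sizes are extracted from a meta-analysis, flagging implausibly large ones for manual review is a one-liner. A minimal sketch (study labels and values are hypothetical; the |d| > 4 cutoff echoes the post and is not a universal standard):

```python
# Hypothetical extracted Cohen's d values from a coded meta-analysis.
effects = {"study_01": 0.35, "study_02": 14.0, "study_03": 0.80, "study_04": 4.7}

# Flag standardized effects beyond a plausibility threshold for review.
flagged = {study: d for study, d in effects.items() if abs(d) > 4}
print(flagged)  # {'study_02': 14.0, 'study_04': 4.7}
```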
Reposted by Malte Elson
Neil Nelson will give the first keynote of the day, about finding lies in the work of scientists. He argues we need to be able to detect what is not true when we interact with the scientific literature. #PSE8
February 12, 2026 at 10:23 AM
Reposted by Malte Elson
LLMs are very good at extracting information from academic articles. They are much better than even highly trained humans (our grad RAs had hundreds of hours of practice). And of course they're ~1000x cheaper and faster.
We coded our ~100k articles using LLMs. Should you believe them? To answer this, we benchmarked 4 human RAs against 3 LLMs on their ability to recover ground truth article data. Details in the paper and appendices, but the LLMs did well and handily beat the highly trained humans.
February 11, 2026 at 5:08 PM
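The benchmarking described here boils down to scoring each coder (human RA or LLM) against the same ground-truth labels. A minimal sketch of that comparison (coder names, articles, and labels are invented for illustration, not taken from the paper):

```python
# Ground-truth codings for a handful of hypothetical articles.
ground_truth = {"a1": "experiment", "a2": "survey", "a3": "experiment"}

# Hypothetical codings produced by a human RA and an LLM.
codings = {
    "RA_1":  {"a1": "experiment", "a2": "survey", "a3": "survey"},
    "LLM_1": {"a1": "experiment", "a2": "survey", "a3": "experiment"},
}

def accuracy(coded: dict, truth: dict) -> float:
    # Share of articles where the coder recovered the true label.
    return sum(coded[k] == truth[k] for k in truth) / len(truth)

for coder, coded in codings.items():
    print(coder, round(accuracy(coded, ground_truth), 2))  # RA_1 0.67, LLM_1 1.0
```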
Reposted by Malte Elson
I have a new paper. We look at ~all stats articles in political science post-2010 & show that 94% have abstracts that claim to reject a null. Only 2% present only null results. This is hard to explain unless the research process has a filter that only lets rejections through.
February 11, 2026 at 5:00 PM
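A back-of-the-envelope way to see why a 94% rejection share points to a filter: simulate a literature where rejections are always written up and null results rarely are. This is a hedged sketch with invented parameters, not the paper's model:

```python
import random

random.seed(1)

def rejection_share(n=200_000, p_true=0.30, power=0.80, alpha=0.05,
                    p_publish_null=0.02):
    """Share of *published* results that reject the null when nulls
    are published with only a small pass-through probability."""
    pub_reject = pub_null = 0
    for _ in range(n):
        true_effect = random.random() < p_true
        rejected = random.random() < (power if true_effect else alpha)
        if rejected:
            pub_reject += 1          # rejections always published
        elif random.random() < p_publish_null:
            pub_null += 1            # nulls rarely make it through
    return pub_reject / (pub_reject + pub_null)

print(f"{rejection_share():.0%} of published results reject the null")
```

With these made-up inputs, only 27.5% of the studies actually run reject the null, yet a mere 2% pass-through for null results already yields a literature that is roughly 95% rejections.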