Danny Maupin
@dmaupin.bsky.social
110 followers 1K following 140 posts
🔬Research Fellow in Health Science University of Surrey 🩺Specialist Vestibular Physiotherapist
dmaupin.bsky.social
Impactful publications but that is hard to measure
dmaupin.bsky.social
Particularly if all that is being churned out is junk or fast-churn science. It's easy to use AI to generate a paper about x predicting y and another about z predicting y, or redundant publications. This has been a problem since before AI but is accelerated now. Looking forward to it publishing more robust /
dmaupin.bsky.social
No, I don't think it would be, just write things vaguely enough
dmaupin.bsky.social
You think he will bother pre-registering? I'm doubtful
dmaupin.bsky.social
I may be confusing the proof-evidence distinction so please educate me if I am, but I don't think we make proof
dmaupin.bsky.social
The hypothesis remark was specific to a reply about looking for a hypothesized effect.

I don't think you make proof if you're doing research without a hypothesis though? You are discovering proof that already exists. I think people take issue with the word "make", particularly from someone openly biased
dmaupin.bsky.social
On the same page about the idea being ridiculous
dmaupin.bsky.social
So I still take issue with the "make proof" aspect. We could test the hypothesis, collect data, all of that, but "make proof" has a different connotation, especially when said by someone openly biased.

I think this conversation is in good fun, as it's interesting debating the philosophy of science, and I'm glad we are /
dmaupin.bsky.social
But we shouldn't look for hypothesized effects. We make a hypothesis and then we test it. That's different. And even the species example, the more I think about it, isn't making proof; we are looking for evidence, yes, but we don't make the proof. The proof existed previously, we just didn't see it /
dmaupin.bsky.social
using science as a monolith when there are examples that prove otherwise, so they probably should've been more specific
dmaupin.bsky.social
I'd say that's a little disingenuous because we aren't talking about discovering new species. This is specifically health research, and not health research looking for a new disease, but research into the causality of a condition with multiple confounders. The other poster is being too broad /
dmaupin.bsky.social
Not that it always is, as researchers often have biases, but this feels like poor wording from individuals who don't seem concerned with gold-standard science despite saying they are
dmaupin.bsky.social
I think people take issue with the wording due to the belief that they are going to interfere with any data or conclusions to back up their point, thus making proof.

If I understand correctly you are saying science is always looking to make proof, but ideally this will be proof for or against /
dmaupin.bsky.social
Congratulations Niall! Looking forward to working alongside you
dmaupin.bsky.social
Excited to be a part of this cohort and can't wait to work with colleagues to explore this area further

#academicsky #metascience
ukri.org
We're pleased to be funding early career researchers to explore how AI is transforming science.

The UK Metascience Unit has announced a cohort of 29 researchers that will receive funding through the AI Metascience Fellowship Programme.

Read more: www.ukri.org/news/interna...
International fellowships to explore AI’s impact on science
New £4 million programme funds early career researchers in the UK, US and Canada to investigate how artificial intelligence (AI) is transforming science.
www.ukri.org
dmaupin.bsky.social
Really good blog, I enjoyed how you didn't completely dismiss the appeal of using LLMs. I do worry about ChatGPT being used for stats, particularly as disciplines with less stats training use it. What happens if they say "oh, just do a t-test on this data because that's what I know", even if it's not ideal for the question?
dmaupin.bsky.social
On impactful outputs.
dmaupin.bsky.social
That makes sense. I guess my concern (mentioned in that thread discussion) is that single papers often become such a hot topic despite being done poorly (to be fair, a different issue) or not being replicated, but the idea sticks with the public. This may be more of a fault of science as a whole with its focus
dmaupin.bsky.social
Looking at this. Am I right in assuming that you are not worried about replication crises because that's what science should do? Continue to iterate and drop ideas that consistently don't work even if there are odd results that do?

I get, too, that this doesn't mean fraud, stat magic, etc.
dmaupin.bsky.social
Interesting thread and thank you for sharing! My first thought when it comes to replication is being able to produce the same result with the same data as described in the methods, though I know this is not always what is assessed. Your thread has communicated well the idea that variation in studies 1/
dmaupin.bsky.social
I like the idea of a week-long process to write a 10-page recommendation letter at Stanford Law. It seems like a good way to evaluate someone's contributions, though it needs to be done well to minimise bias
rorinstitute.bsky.social
“Using metrics to assess researchers can be ‘very dodgy terrain,’” says @jameswilsdon.bsky.social.

Great overview of how research assessment is changing worldwide in @nature.com, featuring recent work by RoRI with the Global Research Council: www.nature.com/articles/d41...
dmaupin.bsky.social
Interesting, I'll have to play around with it and watch some tutorials. Thank you!
dmaupin.bsky.social
How do you like Positron compared to RStudio?