Joe Alderman
@jaldmn.bsky.social
220 followers 170 following 33 posts
Medical AI researcher* | Anaesthesia & critical care doctor | Triathlete (kinda). https://www.birmingham.ac.uk/staff/profiles/inflammation-ageing/alderman-joseph *the realistic, “let’s look at the evidence” kind…
jaldmn.bsky.social
At a loose end this afternoon? Join @xiaoliu.bsky.social and me online at 1pm for a discussion about algorithmic bias and the STANDING Together recommendations

The STANDING Together recommendations give guidance on how to minimise risk of bias in AI health technologies
Tackling algorithmic bias by promoting transparency in health datasets (The STANDING Together recommendations)
Health data is highly complex and can be challenging to interpret without knowing the context in which it was created. Data biases can be encoded into
aiforgood.itu.int
jaldmn.bsky.social
This is wild!

Not that he uses it (many people do), but that the department judged FoI to apply here
Reposted by Joe Alderman
royalstatsoc.bsky.social
We're accepting entries for our 2025 Florence Nightingale Award for Excellence in Health and Care Analytics

Supported by @healthfoundation.bsky.social, it celebrates practitioners whose work in data analytics has led to significant improvements in patient care in the UK
Florence Nightingale Award for Excellence in Health and Care Analytics now open
rss.org.uk
jaldmn.bsky.social
Yikes!
smcgrath.phd
🧪 A new study by @jdwilko.bsky.social et al. finds flawed or fake research affects Cochrane reviews, a gold standard in medical reviews, with 25% of trials raising concerns.

A 21-question checklist and automated tools aim to safeguard global medical guidelines. 🩺 🛟
Giant study finds untrustworthy trials pollute gold-standard medical reviews
Two-year collaboration aims to create tools to help counter the tide of flawed research.
www.nature.com
jaldmn.bsky.social
@unisouthampton.bsky.social @who.int @moorfieldsbrc.bsky.social

Special thanks to our funders & supporters: The NHS AI Lab, The Health Foundation and the NIHR @healthfoundation.bsky.social @nihr.bsky.social

/end.
jaldmn.bsky.social
Last thing to say is an enormous THANK YOU to all who have contributed their time, energy and expertise to this work.

Thanks for STANDING Together with us these last few years 🥹

(@ing a few people below, but I don't have everyone added on BSky. Sorry if I missed anyone out)

12/
jaldmn.bsky.social
We hope STANDING Together helps everyone across the AI development lifecycle to make thoughtful choices about the way they use data, reducing the risk that biases in datasets feed through to biases in algorithms and downstream patient harm.

10/
jaldmn.bsky.social
These recommendations are the culmination of nearly 3 years of work by an international group of researchers, healthcare professionals, policy experts, funders, medical device regulators, AI/ML developers, and many more besides.

9/
jaldmn.bsky.social
STANDING Together = STANdards for data Diversity, INclusivity and Generalisability.

We have worked with >350 stakeholders from 58 countries to agree a set of recommendations to improve the documentation and use of health datasets.

8/
jaldmn.bsky.social
Key point: there is (probably) no such thing as a perfect dataset!

Knowing a dataset's limitations is not a negative - it's actually a positive, because steps can then be taken to mitigate any issues. Not knowing ≠ there are no issues...

7/
jaldmn.bsky.social
Those using datasets should carefully appraise the suitability of the dataset for their purpose, and consider how they might mitigate any biases or limitations contained within.

6/
jaldmn.bsky.social
To prevent this from happening, it's really important that those creating datasets also supply documentation. This should transparently explain what data the dataset contains, and describe any limitations or related issues which those using the data should be aware of.

5/
jaldmn.bsky.social
There are lots of reasons why algorithms can be biased. One key driver is the data used to develop or evaluate them.

Biases in data can pass along the chain and drive biases in algorithms, leading to downstream issues which can be hard to predict in advance.

4/
jaldmn.bsky.social
BUT: these benefits are not guaranteed. In fact, there is growing evidence that medical AI works better for certain groups than others. This may contribute to health inequity and cause patients harm.

3/
jaldmn.bsky.social
The world of medical artificial intelligence is moving at a remarkable pace, with a dizzying range of AI/ML tools already available for use in patients' care today.

These tools are undoubtedly cool, and have great potential to improve health!

2/
Reposted by Joe Alderman
bmj.com
The BMJ @bmj.com · Dec 10
The variety and volume of direct-to-consumer medical tests have increased.

This Analysis argues that the public needs high-quality information and effective communication about the evidence behind the marketing of these tests
www.bmj.com/content/387/...
Home medical testing kits
jaldmn.bsky.social
At a loose end this afternoon? Come join us at midday for a reflection on 2024 in the world of medical AI 👇

Link to join: turing-uk.zoom.us/j/9214978425...