Laurence Aitchison
@laurenceai.bsky.social
Lecturer at the University of Bristol.

probabilistic ML, optimisation, interpretability, LLM evals.
Reposted by Laurence Aitchison
Our paper on the best way to add error bars to LLM evals is on arXiv! TL;DR: Avoid the Central Limit Theorem -- there are better, simple Bayesian and frequentist methods you should be using instead.

We also provide a super lightweight library: github.com/sambowyer/baye… 🧵👇
March 6, 2025 at 3:00 PM
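The paper and the linked library give the recommended Bayesian and frequentist methods in full; purely as an illustration of why the CLT (Wald) interval can mislead on small evals, here is a minimal sketch comparing it with the Wilson score interval, one simple frequentist alternative (function names and the 48/50 example are mine, not the paper's):

```python
import math

def clt_interval(correct, n, z=1.96):
    """Normal-approximation (CLT / Wald) interval: can misbehave for small n
    or accuracies near 0 or 1."""
    p = correct / n
    se = math.sqrt(p * (1 - p) / n)
    return (p - z * se, p + z * se)

def wilson_interval(correct, n, z=1.96):
    """Wilson score interval: stays inside [0, 1] and behaves better
    at the extremes."""
    p = correct / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - half, centre + half)

# Example: 48/50 questions correct on a small eval.
print(clt_interval(48, 50))     # upper endpoint exceeds 1.0
print(wilson_interval(48, 50))  # both endpoints stay inside [0, 1]
```

On this example the Wald interval's upper endpoint is above 1, an impossible accuracy, while the Wilson interval remains a valid probability range.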
Reposted by Laurence Aitchison
Go read it on arXiv! Thanks to my co-authors @sambowyer.bsky.social and @laurenceai.bsky.social 💥
March 6, 2025 at 3:00 PM
Reposted by Laurence Aitchison
📣 Jobs alert

We’re hiring a postdoc and a research engineer to work on UQ for LLMs!! Details ⬇️

#ai #llm #uq
Postdoctoral fellowships and research engineer positions available for an Oxford+Singapore project on uncertainty quantification in LLMs!

docs.google.com/document/d/1...

Oxford deadline is Feb 26. Pls apply if interested, forward to your contacts, contact me if you have questions 🙏🙏
February 12, 2025 at 4:26 PM
Reposted by Laurence Aitchison
Do you know what rating you’ll give after reading the intro? Are your confidence scores 4 or higher? Do you not respond in rebuttal phases? Are you worried how it will look if your rating is the only 8 among 3’s? This thread is for you.
November 27, 2024 at 5:25 PM
Does anyone want to collaborate on an ICML position paper on "The impossibility of mathematically proving AI safety"? The basic thesis is that it is a category error to try to prove AI safety in the real world. (1/3)
November 27, 2024 at 10:44 AM