Nicola Branchini
@nicolabranchini.bsky.social
🇮🇹 Stats PhD @ University of Edinburgh 🏴󠁧󠁢󠁳󠁣󠁴󠁿

@ellis.eu PhD - visiting @avehtari.bsky.social 🇫🇮

🤔💭 Monte Carlo, probabilistic ML.

Interested in many things related to probML; keen to learn about applications in climate/science.

https://www.branchini.fun/about
Like, it's clearly a thing in the culture to "implement algo X from scratch in numpy / jax!"... because a popular algorithm is often so obscurely presented in papers (with a lot of mathiness and information-theoretic principles, yada yada) that this needs to be a thing.
November 22, 2025 at 9:40 PM
I'm afraid there is often also an element of writing to create a tribe and to look smart, instead of writing for clarity / to be understood.
November 22, 2025 at 9:37 PM
It is a very unfortunate trait of much of the ML literature that algorithmic details and implementations are hidden in badly written supplementaries, or, much worse, in large codebases. Reading an algorithm proposed in a SIAM or signal processing journal vs. in ML can be a very different experience.
November 22, 2025 at 9:37 PM
Harsh. I choose to work with person X or Y because I really like their topic!
November 17, 2025 at 5:06 PM
I didn't even mention the concept of "compiti delle vacanze" (holiday homework)...
November 3, 2025 at 10:54 AM
If the reviewer / meta-reviewer does a bad job or is not fit for the paper, no structure will fix that, of course.
And the pool of reviewers will (I guess) overlap significantly with NeurIPS/ICML/etc.
Still, it's worth trying to have (healthier?) incentives/structure, as in TMLR.
October 21, 2025 at 2:19 PM
and forces the reviewer to focus on requested changes.
Of course, I also like TMLR because of its spirit, which for me is about correctness and details rather than significance or importance, which are typically in the eye of the (powerful) beholder.
October 21, 2025 at 9:45 AM
Reposted by Nicola Branchini
24. arxiv.org/abs/2510.00389
'Zero variance self-normalized importance sampling via estimating equations'
- Art B. Owen

Even with optimal proposals, achieving zero variance with SNIS-type estimators requires some innovative thinking. This work explains how an optimisation formulation can apply.
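
A minimal sketch of the vanilla SNIS estimator the post refers to, to make the setup concrete. The weights w_i = p~(x_i)/q(x_i) only need the target up to a normalizing constant, and the estimate is the ratio sum_i w_i f(x_i) / sum_i w_i; being a ratio of two correlated sums is exactly why no fixed proposal drives the variance to zero, which is the obstruction the paper tackles. The toy target/proposal below are my own illustrative assumptions, not from the paper.

import numpy as np

rng = np.random.default_rng(0)

def snis_estimate(f, log_p_tilde, sample_q, log_q, n):
    """Self-normalized IS estimate of E_p[f(X)], with p known up to a constant."""
    x = sample_q(n)                        # draws from the proposal q
    log_w = log_p_tilde(x) - log_q(x)      # unnormalized log-weights
    w = np.exp(log_w - log_w.max())        # stabilize before exponentiating
    return np.sum(w * f(x)) / np.sum(w)    # ratio estimator: constants cancel

# Toy check: target p = N(0, 1) (unnormalized), proposal q = N(1, 2^2),
# estimating E[X^2] = 1.
est = snis_estimate(
    f=lambda x: x**2,
    log_p_tilde=lambda x: -0.5 * x**2,
    sample_q=lambda n: rng.normal(1.0, 2.0, size=n),
    log_q=lambda x: -0.5 * ((x - 1.0) / 2.0) ** 2,
    n=100_000,
)
print(est)  # close to 1.0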
October 4, 2025 at 4:03 PM
"Conditional Causal Discovery"

(don't be fooled by the title :D )

openreview.net/forum?id=6IY...
October 4, 2025 at 4:01 PM
"Estimating the Probabilities of Rare Outputs in Language Models"

arxiv.org/abs/2410.13211
October 4, 2025 at 4:01 PM
"Stochastic Optimization with Optimal Importance Sampling"

arxiv.org/abs/2504.03560
October 4, 2025 at 4:01 PM