@ellis.eu PhD - visiting @avehtari.bsky.social 🇫🇮
🤔💭 Monte Carlo, probabilistic ML.
Interested in many things relating to probML, keen to learn applications in climate/science.
https://www.branchini.fun/about
And the people reviewing will (I'd guess) overlap significantly with NeurIPS/ICML/etc..
Still, it's worth trying to have (healthier?) incentives/structure, as in TMLR
Of course, I also like TMLR for its spirit, which for me is about correctness and detail rather than significance or importance, which are typically in the eye of the (powerful) beholder.
'Zero variance self-normalized importance sampling via estimating equations'
- Art B. Owen
Even with optimal proposals, achieving zero variance with SNIS-type estimators requires some innovative thinking. This work explains how an optimisation formulation can apply.
epubs.siam.org/doi/abs/10.1...
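For context, a minimal sketch of the standard self-normalized importance sampling (SNIS) estimator the post refers to; the Gaussian target and proposal here are my own illustrative choices, not from Owen's paper. SNIS estimates E_p[f(X)] with draws from a proposal q, normalizing the weights so p need only be known up to a constant — which is also why its variance is generally nonzero even for good proposals: numerator and denominator share the same noisy samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target p = N(1, 1), known only up to a constant; estimand mu = E_p[X] = 1.
def log_p_unnorm(x):
    return -0.5 * (x - 1.0) ** 2

# Proposal q = N(0, 2^2), which we can sample from and evaluate.
def log_q(x):
    return -0.5 * (x / 2.0) ** 2 - np.log(2.0) - 0.5 * np.log(2 * np.pi)

n = 100_000
x = rng.normal(0.0, 2.0, size=n)      # draws from q
logw = log_p_unnorm(x) - log_q(x)     # unnormalized log-weights p/q
w = np.exp(logw - logw.max())         # subtract max for numerical stability
mu_hat = np.sum(w * x) / np.sum(w)    # SNIS: normalization cancels p's constant
print(mu_hat)
```

Here mu_hat should land close to 1.0. Owen's paper shows how recasting the problem as estimating equations sidesteps the usual lower bound on SNIS variance.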