@ellis.eu PhD - visiting @avehtari.bsky.social 🇫🇮
🤔💭 Monte Carlo, probabilistic ML.
Interested in many things relating to probML, keen to learn applications in climate/science.
https://www.branchini.fun/about
TLDR;
To estimate µ = E_p[f(θ)] with SNIS, instead of doing MCMC on p(θ) or learning a parametric q(θ), we try MCMC directly on p(θ)·|f(θ) − µ| (the variance-minimizing proposal).
arxiv.org/abs/2505.00372
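In case a concrete picture helps, here is a minimal toy sketch of the idea (my own illustration, not the paper's algorithm): p = N(0, 1), f(θ) = θ², and a crude pilot estimate standing in for the unknown µ that the optimal proposal depends on.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: p(theta) = N(0, 1); estimand mu = E_p[f(theta)] with
# f(theta) = theta^2, so the true value is mu = 1.
log_p = lambda th: -0.5 * th**2
f = lambda th: th**2

def rw_metropolis(log_density, n, step=1.0, init=0.5):
    """Random-walk Metropolis targeting exp(log_density), known only up to a constant."""
    th, lp = init, log_density(init)
    out = np.empty(n)
    for i in range(n):
        prop = th + step * rng.standard_normal()
        lp_prop = log_density(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            th, lp = prop, lp_prop
        out[i] = th
    return out

# Pilot pass: the optimal proposal depends on the unknown mu itself, so a
# rough plug-in estimate is needed first (the paper treats this circularity
# properly; this is just the naive version).
mu_pilot = f(rw_metropolis(log_p, 2_000)).mean()

# MCMC directly on the unnormalized variance-minimizing proposal
# q*(theta) ∝ p(theta) * |f(theta) - mu|.
log_qstar = lambda th: log_p(th) + np.log(np.abs(f(th) - mu_pilot) + 1e-12)
draws = rw_metropolis(log_qstar, 20_000)

# SNIS weights: w ∝ p/q* = 1/|f - mu_pilot|; self-normalization absorbs
# the unknown normalizing constants of both p and q*.
w = 1.0 / (np.abs(f(draws) - mu_pilot) + 1e-12)
mu_hat = np.sum(w * f(draws)) / np.sum(w)
print(mu_hat)  # close to 1.0
```

The self-normalization is doing the real work here: both p and the |f − µ| proposal are known only up to constants, which is exactly what makes the MCMC-on-q* route viable.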
Topics: Multimodal foundation models, out-of-distribution deployable machine learning, collaborative machine learning
kaski-lab.com
Others, please share
Apply through the ELLIS PhD program (deadline October 31) ellis.eu/news/ellis-p...
arxiv.org/abs/2510.07559
The main problem we solve is constructing importance weights for Markov chain Monte Carlo; we achieve this via a method we call harmonization by coupling.
Science frequently: it is week three of being unable to reproduce my experiment from six months ago. Was I wrong then or am I wrong now? Is it just noise? I have not seen the sun.
It traces the core ideas that shaped diffusion modeling and explains how today’s models work, why they work, and where they’re heading.
www.arxiv.org/abs/2510.21890
This framing mitigates the System 1 thinking that comes from seeing "reject/accept" decisions
'Zero variance self-normalized importance sampling via estimating equations'
- Art B. Owen
Even with optimal proposals, achieving zero variance with SNIS-type estimators requires some innovative thinking. This work explains how an optimisation formulation can apply.
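For anyone skimming, the hook for the estimating-equation view (a standard identity, my paraphrase of the setup, not a quote from the paper): with draws θ_i ∼ q and unnormalized weights w ∝ p/q, the usual SNIS ratio is itself the root of a weighted equation.

```latex
\[
\hat\mu_{\mathrm{SNIS}}
  = \frac{\sum_{i=1}^{n} w(\theta_i)\, f(\theta_i)}
         {\sum_{i=1}^{n} w(\theta_i)}
\quad\Longleftrightarrow\quad
\sum_{i=1}^{n} w(\theta_i)\,\bigl(f(\theta_i) - \hat\mu_{\mathrm{SNIS}}\bigr) = 0,
\qquad w(\theta) \propto \frac{p(\theta)}{q(\theta)},\ \theta_i \sim q.
\]
```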
Apply through the ELLIS PhD program (deadline October 31) ellis.eu/news/ellis-p...
"Value-aware Importance Weighting for Off-policy Reinforcement Learning"
proceedings.mlr.press/v232/de-asis...
"Value-aware Importance Weighting for Off-policy Reinforcement Learning"
proceedings.mlr.press/v232/de-asis...
statmodeling.stat.columbia.edu/2025/10/03/i...
Speakers include Lihua Xie, Karl H. Johansson, Jonathan How, Andrea Serrani, Carolyn L. Beck, and others.
#ControlTheory #AutomaticControl #AaltoUniversity #IEEE
The slides are now available here: fxbriol.github.io/pdfs/slides-....
Join us in 48 hours for a special announcement about Hollow Knight: Silksong!
Premiering here: youtu.be/6XGeJwsUP9c
www.youtube.com/watch?v=_fF6...
www.youtube.com/watch?v=mGuK...
www.youtube.com/watch?v=yRDa...
"I value more the finding of a truth, even if about something trivial, than the long disputing of the greatest questions without attaining any truth at all"
Feels like we could use some of that in research tbh..
"I value more the finding of a truth, even if about something trivial, than the long disputing of the greatest questions without attaining any truth at all"
Feels like we could use some of that in research tbh..
fjhickernell.github.io/mcm2025/prog...
Will give a talk on our recent and ongoing work on self-normalized importance sampling, including learning a proposal with MCMC and ratio diagnostics.
www.branchini.fun/pubs
with P^{q} := ∫ p(y|θ) q(θ) dθ, where q(θ) ≈ p(θ|D), then estimate _that_ with MC.
You know me. I don't get it.
What do I miss?
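For concreteness, here is the construction as I read it, on a toy conjugate model where the answer is known in closed form (the Gaussian likelihood and q below are my stand-ins, not from the thread):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Toy stand-ins: likelihood p(y|theta) = N(y | theta, 1), and
# q(theta) = N(0, 1) playing the role of an approximation to p(theta|D).
y = 0.7

# Plain Monte Carlo estimate of P^q = ∫ p(y|theta) q(theta) dtheta:
theta = rng.standard_normal(100_000)                 # theta_i ~ q
p_hat = stats.norm.pdf(y, loc=theta, scale=1.0).mean()

# Closed form for this toy case: ∫ N(y|θ, 1) N(θ|0, 1) dθ = N(y | 0, 2).
print(p_hat, stats.norm.pdf(y, loc=0.0, scale=np.sqrt(2.0)))
```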