CausalPFN has built-in calibration and can make reliable estimates even for datasets that fall outside its pretraining prior.
Try it: pip install causalpfn (a rough usage sketch follows below)
Made with ❤️ for better causal inference
[7/7]
#CausalInference #ICML2025
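A minimal usage sketch, assuming a scikit-learn-style interface; the import and method names here are illustrative guesses, not the package's verified API (check the causalpfn README for the real one):

```python
# Illustrative sketch only: `CausalPFN` and its methods are assumed names,
# not the package's confirmed API -- see the causalpfn README.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                  # covariates
T = rng.integers(0, 2, size=500)               # binary treatment
Y = X[:, 0] + 2.0 * T + rng.normal(size=500)   # outcome, true ATE = 2

# from causalpfn import CausalPFN             # hypothetical import
# model = CausalPFN()                          # pretrained weights, no tuning
# cate_hat = model.estimate_cate(X, T, Y)      # per-unit effect estimates
# print(cate_hat.mean())                       # should land near 2
```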
Our theory shows that the posterior distribution of causal effects is consistent if and only if the pretraining data includes only identifiable causal structures.
👉 We show how to carefully design the prior, one of the key differences between our work and predictive PFNs. [6/7]
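Roughly, the consistency claim can be sketched like this (the notation is assumed for illustration, not taken from the thread):

```latex
% Rough formalization; notation assumed here, not the paper's.
% \pi: pretraining prior over data-generating processes (DGPs)
% D_n: observed dataset of size n;  \tau^\ast: true causal effect
p(\tau \mid D_n) \xrightarrow[n \to \infty]{} \delta_{\tau^\ast}
\quad \Longleftrightarrow \quad
\text{every DGP in } \operatorname{supp}(\pi) \text{ is identifiable}
```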
CausalPFN works out of the box on real-world data. On 5 real RCTs in marketing (Hillstrom, Criteo, Lenta, etc.), it outperforms baselines like X-/S-/DA-Learners on policy evaluation (Qini score). [5/7]
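For context, the Qini score ranks units by predicted uplift and measures the incremental outcomes gained by targeting the top of that ranking. A generic sketch of the standard uplift metric (not code from the paper); CausalPFN's CATE estimates would play the role of `uplift` here:

```python
# Generic Qini-score sketch for binary-treatment RCT data; a standard
# uplift-evaluation metric, not code from the CausalPFN paper.
import numpy as np

def qini_score(uplift, treatment, outcome):
    """Unnormalized area under the Qini curve (higher = better targeting)."""
    order = np.argsort(-uplift)               # rank units by predicted uplift
    t = treatment[order].astype(float)
    y = outcome[order].astype(float)
    n_t = np.cumsum(t)                         # treated units in the top-k
    n_c = np.arange(1, len(y) + 1) - n_t       # control units in the top-k
    y_t = np.cumsum(y * t)                     # treated positives in the top-k
    y_c = np.cumsum(y * (1 - t))               # control positives in the top-k
    # Qini(k): incremental positives vs. a size-matched control estimate
    qini = y_t - np.where(n_c > 0, y_c * n_t / np.maximum(n_c, 1), 0.0)
    return qini.mean()
```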
On IHDP, ACIC, Lalonde:
– Best avg. rank across many tasks
– Faster than all baselines
– No tuning needed, unlike the baselines (which were tuned via cross-validation)
[4/7]
Causal inference traditionally needs domain expertise + hyperparameter tuning across dozens of estimators. CausalPFN flips this paradigm: we pay the cost once (at pretraining), and then it’s ready to use out of the box! [3/7]
CausalPFN turns effect estimation into a supervised learning problem. It's a transformer trained on millions of simulated datasets, learning to map directly from data to treatment-effect distributions. At test time, no finetuning or manual estimator selection is required. [2/7]
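A schematic of that pretraining recipe; everything here, including the simulator and the model name, is an illustrative assumption, not the authors' code:

```python
# Schematic PFN-style pretraining for effect estimation (all names and
# details are illustrative assumptions, not the authors' implementation).
import torch

rng = torch.Generator().manual_seed(0)

def sample_simulated_task(n=256, d=5):
    """Hypothetical simulator: draw a random causal DGP, return data + true CATE."""
    X = torch.randn(n, d, generator=rng)
    tau = torch.tanh(X @ torch.randn(d, generator=rng))   # ground-truth per-unit effect
    T = torch.bernoulli(torch.full((n,), 0.5), generator=rng)
    Y = X @ torch.randn(d, generator=rng) + tau * T + 0.1 * torch.randn(n, generator=rng)
    return X, T, Y, tau

# Pretraining loop: the transformer reads (X, T, Y) in context and is
# supervised with the simulator's true effects -- estimation as prediction.
# model = EffectTransformer(...)               # hypothetical architecture
# for step in range(num_steps):
#     X, T, Y, tau = sample_simulated_task()
#     loss = loss_fn(model(X, T, Y), tau)      # e.g. a distributional loss
#     loss.backward(); optimizer.step(); optimizer.zero_grad()
```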
❤️ w/ Keertana Chidambaram, Viet Nguyen, @rahulgk.bsky.social, and Vasilis Syrgkanis
Link to paper: arxiv.org/abs/2404.07266