Burak Varıcı
burakvarici.bsky.social
Postdoc at CMU, previously PhD at RPI. Causality, representation learning, misc.
Joint work w/ Runtian Zhai (lead author), Kai Yang, Che-Ping Tsai, Zico Kolter, and Pradeep Ravikumar.
July 15, 2025 at 5:54 PM
Informally, we learn the linear span of the top singular functions of an implicit kernel induced by the input-context pair.

We show that this is the case for various learning objectives, explore when it is optimal, provide empirical evidence, and propose a practically useful metric.
July 15, 2025 at 5:54 PM
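The spectral claim above can be illustrated on a toy discrete example. This is my own sketch, not code from the paper: with finitely many inputs and contexts, the implicit kernel becomes a matrix, the "top singular functions" become top singular vectors, and only their *span* (not the individual directions) is pinned down.

```python
# Toy sketch (my own illustration, not the paper's code): represent the
# implicit kernel of an input-context pair by a normalized joint matrix
# K[x, a] = P(x, a) / sqrt(P(x) P(a)), and take its top singular vectors.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical joint distribution over 50 inputs x and 30 contexts a.
P = rng.random((50, 30))
P /= P.sum()

px = P.sum(1, keepdims=True)            # marginal P(x)
pa = P.sum(0, keepdims=True)            # marginal P(a)
K = P / np.sqrt(px * pa)                # implicit kernel matrix

# Top-d singular functions of the implicit kernel.
d = 5
U, S, Vt = np.linalg.svd(K, full_matrices=False)
top_span = U[:, :d]                     # basis of the learned subspace

# Any invertible mixing of these directions spans the same subspace,
# which is why only the span is identifiable.
W = rng.random((d, d)) + np.eye(d)      # generic invertible mixing
features = top_span @ W

# Check: the projection onto the feature span is unchanged.
proj1 = top_span @ np.linalg.pinv(top_span)
proj2 = features @ np.linalg.pinv(features)
assert np.allclose(proj1, proj2)
```

The assertion at the end is the point: an encoder that learns any invertible mixing of the top singular functions recovers the same subspace.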
4) 𝗖𝗮𝘂𝘀𝗮𝗹 𝗯𝗮𝗻𝗱𝗶𝘁𝘀: I'll also present our JMLR paper on causal bandits for linear SEMs (thanks to journal-to-conference track!)

Paper: arxiv.org/abs/2208.12764
Poster: West Ballroom A-D #5000 Wed 4.30pm

Joint work w/ @atajer.bsky.social, Karthikeyan Shanmugam, and Prasanna Sattigeri
Causal Bandits for Linear Structural Equation Models
December 9, 2024 at 12:45 AM
We establish necessary and sufficient conditions on the level of interventions required. Then, we propose a learning algorithm and analyze its optimality gap.

Joint work w/ @atajer.bsky.social, Dmitriy Katz, Dennis Wei, and Prasanna Sattigeri
December 9, 2024 at 12:45 AM
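For intuition on the regret objective, here is a toy simulation. It is my own illustration with made-up numbers, not the paper's algorithm: a two-node linear SEM where each arm is a hard intervention do(X1 = v), and a naive explore-then-commit strategy is scored against the best intervention in hindsight.

```python
# Minimal sketch (illustrative only, not the paper's method): in the SEM
# X2 = b * X1 + noise, pulling arm v means intervening do(X1 = v) and
# observing the reward X2. Cumulative regret compares the pulled arms to
# the best intervention in hindsight.
import numpy as np

rng = np.random.default_rng(1)
b = 0.8                                   # hypothetical edge weight X1 -> X2

def reward(v, n):
    """Observe X2 = b * X1 + noise under do(X1 = v), n times."""
    return v * b + rng.normal(0.0, 0.1, n)

arms = [0.0, 0.5, 1.0]                    # candidate interventions do(X1 = v)
T, n_explore = 300, 20

# Explore: pull each arm n_explore times and estimate its mean reward.
est = [reward(a, n_explore).mean() for a in arms]
best_arm = arms[int(np.argmax(est))]

# Commit: play the empirically best arm for the remaining rounds.
pulls = [a for a in arms for _ in range(n_explore)]
pulls += [best_arm] * (T - len(pulls))

best_mean = max(a * b for a in arms)      # best intervention in hindsight
regret = sum(best_mean - a * b for a in pulls)
print(f"cumulative regret over T={T}: {regret:.2f}")
```

The gap between this naive strategy and an algorithm that exploits the SEM structure is exactly what a causal-bandit regret analysis quantifies.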
3) 𝗖𝗮𝘂𝘀𝗮𝗹 𝗱𝗶𝘀𝗰𝗼𝘃𝗲𝗿𝘆 𝗳𝗿𝗼𝗺 𝗺𝗶𝘅𝘁𝘂𝗿𝗲𝘀 is fundamentally more challenging than learning a single DAG. We look into using interventions for causal discovery from a mixture of DAGs.

Paper: arxiv.org/abs/2406.08666
Poster: West Ballroom A-D #5006 Thu 4.30pm
Interventional Causal Discovery in a Mixture of DAGs
December 9, 2024 at 12:45 AM
We design a finite-sample CRL algorithm for linear transforms, and establish its sample complexity for i) generic, and ii) RKHS-based score estimators.

Both CRL papers are joint work w/ @atajer.bsky.social, Emre Acartürk, and Karthikeyan Shanmugam.
December 9, 2024 at 12:45 AM
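As a toy illustration of the "generic" score-estimator setting (my own sketch, with made-up numbers): for Gaussian data the score is s(x) = -Θx, so a plug-in estimate of the precision matrix Θ controls the score-estimation error that a finite-sample analysis has to track.

```python
# Rough sketch (my illustration, not the paper's estimator): estimate the
# Gaussian score s(x) = -Theta @ x by plugging in the empirical precision
# matrix, and watch the error shrink with the sample size.
import numpy as np

rng = np.random.default_rng(2)
Theta = np.array([[2.0, -0.5],
                  [-0.5, 1.0]])               # true precision matrix
Sigma = np.linalg.inv(Theta)
L = np.linalg.cholesky(Sigma)

def precision_error(n):
    X = rng.normal(size=(n, 2)) @ L.T         # n samples from N(0, Sigma)
    Theta_hat = np.linalg.inv(np.cov(X, rowvar=False))
    return np.linalg.norm(Theta_hat - Theta)  # drives the score error

errs = [precision_error(n) for n in (100, 10_000)]
assert errs[1] < errs[0]                      # more samples, smaller error
```

A finite-sample CRL guarantee stacks a bound like this (or its RKHS analogue for non-Gaussian data) on top of the identifiability argument.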

2) 𝗦𝗮𝗺𝗽𝗹𝗲 𝗰𝗼𝗺𝗽𝗹𝗲𝘅𝗶𝘁𝘆 𝗼𝗳 𝗖𝗥𝗟: In the paper led by Emre Acartürk, we look into Sample Complexity of Interventional CRL.

Paper: openreview.net/forum?id=XL9...
Poster: West Ballroom A-D #5002 Wed 11am
Sample Complexity of Interventional Causal Representation Learning
December 9, 2024 at 12:45 AM
But without knowing the multi-node intervention targets, how do we find the correct combinations of the given score functions? The key is the dimension of the images of the newly constructed score differences.
December 9, 2024 at 12:45 AM
Main idea: Combinations of score functions of multi-node interventions => new interventions with desired properties, e.g. sparsity. Intuitively, with diverse multi-node interventions (e.g. in the figure), we can synthesize any desired intervention!
December 9, 2024 at 12:45 AM
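A toy numerical check of the "dimension of score difference images" idea. This is my own sketch, assuming linear Gaussian SEMs: there the score of each distribution is the linear map x ↦ -Θx given by its precision matrix, so the image dimension of a score difference is just the rank of a precision difference, and a sparse (e.g. single-node) intervention yields a low rank.

```python
# Toy illustration (my own, for linear Gaussian SEMs): the score
# difference between two distributions with precisions Theta1, Theta2 is
# x |-> (Theta2 - Theta1) @ x, so its image dimension is
# rank(Theta1 - Theta2).
import numpy as np

def score_diff_image_dim(Theta1, Theta2, tol=1e-8):
    return np.linalg.matrix_rank(Theta1 - Theta2, tol=tol)

# 3-node chain X1 -> X2 -> X3 with unit noise; for a linear SEM with
# weight matrix B, the precision is (I - B)^T (I - B).
B = np.array([[0.0, 0.7, 0.0],
              [0.0, 0.0, 0.5],
              [0.0, 0.0, 0.0]])
I = np.eye(3)
Theta_obs = (I - B).T @ (I - B)

# Hypothetical hard intervention on node 2: cut its incoming edge.
B_int = B.copy()
B_int[0, 1] = 0.0
Theta_int = (I - B_int).T @ (I - B_int)

# Low rank = the score difference only "touches" the intervened node
# and its parent, which is the sparsity signature the algorithm hunts for.
print(score_diff_image_dim(Theta_obs, Theta_int))
```

Combining score functions of different multi-node environments and monitoring this rank is how one can tell when a synthesized intervention has become suitably sparse.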
1) 𝗠𝘂𝗹𝘁𝗶-𝗻𝗼𝗱𝗲 𝗖𝗥𝗟: Existing work (mostly) assumes atomic interv. For linear transforms, we prove that unknown multi-node interv. can guarantee the same results as single-node interventions!

Paper: arxiv.org/abs/2406.05937
Poster: West Ballroom A-D #5005 Wed 11am
Linear Causal Representation Learning from Unknown Multi-node Interventions
December 9, 2024 at 12:45 AM
true, I need to practice that more often :)
December 5, 2024 at 1:57 PM