Murat Kocaoglu
@murat-kocaoglu.bsky.social
Asst. Prof. at Purdue ECE. Causal ML Lab. Causal discovery, causal inference, deep generative models, info theory, online learning. Past: MIT-IBM AI Lab, UT Austin, Koç, METU.
We then update the posteriors over each graph cut, which quickly converge to the true cut configurations. This gives us a sample-efficient, non-parametric way to learn causal graphs from interventions over discrete variables.
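A minimal toy sketch of the posterior-update idea (not the paper's estimator; the pair of hypotheses and the 0.9/0.5 likelihood parameters below are assumed purely for illustration):

import numpy as np

# Bayesian update over the two possible "cut configurations" for a single
# pair (X, Y): X -> Y vs. no edge, using samples from the intervention do(X=1).
# Assumed toy likelihoods: if X -> Y then P(Y=1 | do(X=1)) = 0.9; if there is
# no edge, Y ~ Bernoulli(0.5) regardless of the intervention.

rng = np.random.default_rng(0)

def likelihood(y, hypothesis):
    # P(Y = y | do(X = 1)) under each hypothesis
    p1 = 0.9 if hypothesis == "X->Y" else 0.5
    return p1 if y == 1 else 1 - p1

posterior = {"X->Y": 0.5, "no edge": 0.5}    # uniform prior over configurations
true_hypothesis = "X->Y"

for _ in range(30):                          # interventional samples from do(X=1)
    y = rng.binomial(1, 0.9 if true_hypothesis == "X->Y" else 0.5)
    for h in posterior:
        posterior[h] *= likelihood(y, h)     # Bayes rule: multiply in likelihood
    z = sum(posterior.values())
    posterior = {h: p / z for h, p in posterior.items()}  # renormalize

print(posterior)  # mass concentrates on the true configuration within a few samples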

The green curve at the bottom is our method; the rest are baselines.
December 10, 2024 at 5:13 PM
Finally, our bandit algorithm can operate in unknown environments by exploiting the fact that partial causal discovery is sufficient for achieving optimal regret; pseudocode below:
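The actual pseudocode is in the post image; here is a rough runnable sketch of the two-phase structure under stated assumptions (the discovery phase is simulated with hand-picked arms and reward means, and plain UCB1 stands in for the paper's bandit subroutine):

import numpy as np

# Phase 1 (simulated here): partial causal discovery learns just enough of the
# graph to compute the POMISs; the arms below stand in for interventions on them.
arms = ["do(V2=0)", "do(V2=1)", "do(V1=0,V2=1)"]
true_means = {"do(V2=0)": 0.3, "do(V2=1)": 0.7, "do(V1=0,V2=1)": 0.5}  # unknown to the learner

# Phase 2: standard UCB1 over the reduced arm set only.
rng = np.random.default_rng(1)
counts = {a: 0 for a in arms}
sums = {a: 0.0 for a in arms}
for t in range(1, 2001):
    # pick the arm with the highest upper confidence bound (each arm played once first)
    ucb = lambda a: float("inf") if counts[a] == 0 else \
        sums[a] / counts[a] + np.sqrt(2 * np.log(t) / counts[a])
    a = max(arms, key=ucb)
    r = rng.binomial(1, true_means[a])   # pull the arm: Bernoulli reward
    counts[a] += 1
    sums[a] += r

print(counts)  # most pulls concentrate on the best POMIS intervention, do(V2=1)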
December 8, 2024 at 7:36 PM
A toy example from the paper: missing V1 <--> V3 does not affect the possibly optimal minimal intervention sets (POMIS), while missing any other bidirected edge does. So our causal bandit algorithm doesn't need to allocate rounds to learning this edge once the rest of the graph is learned.
December 8, 2024 at 7:36 PM
With our method, we can quantify how spurious correlations in training data affect large image generative models. For example, we can quantify how much changing the biological sex of a person affects their perceived age, a non-causal relation that shouldn't be there:
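A schematic of the measurement, not the paper's exact metric: generate images with the sex attribute set to each value, score perceived age with an attribute classifier, and compare the two distributions. Simulated classifier scores stand in for a real generator plus age classifier here; all names and numbers are illustrative only.

import numpy as np

rng = np.random.default_rng(2)

# Stand-ins: perceived-age scores for images generated as "male" vs. "female".
# In a model free of this spurious correlation, the two distributions would match.
age_male = rng.normal(45.0, 12.0, size=5000)
age_female = rng.normal(38.0, 12.0, size=5000)   # shifted mean: the spurious effect

gap = age_male.mean() - age_female.mean()
print(f"perceived-age gap from changing sex: {gap:.1f} years")
# A nonzero gap quantifies a non-causal dependence inherited from training data.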
December 8, 2024 at 12:57 AM
With enough data, we know which causal questions can be answered and which cannot, thanks to the ID algorithms of Tian & Pearl and Shpitser & Pearl. But these require the likelihoods of complicated high-dimensional distributions, which can't be explicitly learned from data, e.g.,
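For a concrete textbook instance (not from this thread): in the front-door graph X -> Z -> Y with a latent confounder between X and Y, the ID algorithm returns

P(y \mid do(x)) = \sum_z P(z \mid x) \sum_{x'} P(y \mid x', z) P(x')

and when X, Z, Y are images or other high-dimensional variables, factors like P(y \mid x', z) are exactly the likelihoods that can't be modeled explicitly.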
December 8, 2024 at 12:57 AM