Murat Kocaoglu
@murat-kocaoglu.bsky.social
Asst. Prof. at Purdue ECE. Causal ML Lab. Causal discovery, causal inference, deep generative models, info theory, online learning. Past: MIT-IBM AI Lab, UT Austin, Koc, METU.
Thank you so much Naftali! Hope all is well
June 5, 2025 at 4:52 PM
Thank you!
June 3, 2025 at 11:26 AM
Bookmark our lab page and GitHub repo to follow our work:
muratkocaoglu.com/CausalML/
github.com/CausalML-Lab
June 3, 2025 at 8:42 AM
CausalML Lab will continue to push the boundaries of fundamental causal inference and discovery research with an added focus on real-world applications and impact. If you are at Johns Hopkins @jhu.edu, or more generally on the East Coast, and are interested in collaborating, please reach out.
June 3, 2025 at 8:42 AM
I am also deeply grateful to Purdue University and @purdueece.bsky.social for their support during my first four years as a professor. I had the privilege of teaching enthusiastic undergrads, working with outstanding PhD students, and collaborating with great colleagues there. I learned a great deal from them.
June 3, 2025 at 8:42 AM
Faithfulness can be relaxed significantly. Determinism (?): I am not sure what this means, but I don't think it's equivalent to the CMC. "No unmeasured confounders" is not necessary either. You need some sparsity to observe some independence pattern, that's all; enough to learn something, though not necessarily everything.
January 25, 2025 at 3:23 AM
They used to give a champagne glass? I was surprised to see something other than a mug this year.
December 28, 2024 at 5:29 PM
We will present this work at #NeurIPS2024 on Wednesday at 4:30pm local time in Vancouver. Poster #5107.

Led by my PhD students Zihan Zhou and Qasim Elahi.

Paper link:
openreview.net/forum?id=RfS...

Follow us for more updates from the #CausalML Lab!
Sample Efficient Bayesian Learning of Causal Graphs from Interventions
Causal discovery is a fundamental problem with applications spanning various areas in science and engineering. It is well understood that solely using observational data, one can only orient the...
openreview.net
December 10, 2024 at 5:13 PM
Wienöbst et al.'s way of uniformly sampling from Markov-equivalent DAGs allows us to answer other interesting questions. We focus on estimating the causal effect of non-manipulable variables: we can learn the edges adjacent to such a node (a graph cut) and use adjustment from observational data (a brute-force illustration of the sampling idea is sketched below).
December 10, 2024 at 5:13 PM
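For intuition, a minimal brute-force sketch of what sampling a DAG uniformly from a Markov equivalence class involves: enumerate every acyclic orientation of the CPDAG's undirected edges that creates no new v-structure, then pick one at random. The function names and the tiny chain example are mine for illustration; Wienöbst et al. (2023) do this in polynomial time, which this sketch does not.

```python
import random
from itertools import product


def is_acyclic(edges, nodes):
    """Kahn's algorithm: True iff the directed graph has no cycle."""
    indeg = {v: 0 for v in nodes}
    out = {v: [] for v in nodes}
    for u, v in edges:
        out[u].append(v)
        indeg[v] += 1
    stack = [v for v in nodes if indeg[v] == 0]
    seen = 0
    while stack:
        u = stack.pop()
        seen += 1
        for v in out[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                stack.append(v)
    return seen == len(nodes)


def v_structures(oriented, skeleton):
    """All colliders u -> w <- v with u and v non-adjacent in the skeleton."""
    parents = {}
    for u, w in oriented:
        parents.setdefault(w, set()).add(u)
    return {
        (u, w, v)
        for w, pa in parents.items()
        for u in pa for v in pa
        if u < v and frozenset((u, v)) not in skeleton
    }


def sample_dag_from_mec(directed, undirected, nodes, rng=random):
    """Uniformly sample a DAG from the MEC encoded by a CPDAG: directed edges
    are fixed, undirected ones may be oriented either way, as long as the
    result is acyclic and adds no new v-structure. Exponential in the number
    of undirected edges -- for tiny graphs only."""
    skeleton = {frozenset(e) for e in directed} | {frozenset(e) for e in undirected}
    target = v_structures(directed, skeleton)
    members = []
    for bits in product([0, 1], repeat=len(undirected)):
        edges = set(directed)
        for bit, (u, v) in zip(bits, undirected):
            edges.add((u, v) if bit else (v, u))
        if is_acyclic(edges, nodes) and v_structures(edges, skeleton) == target:
            members.append(edges)
    return rng.choice(members), len(members)


if __name__ == "__main__":
    # CPDAG of the chain 0 - 1 - 2 (no v-structure): its MEC contains 3 DAGs.
    dag, size = sample_dag_from_mec(directed=[], undirected=[(0, 1), (1, 2)],
                                    nodes=[0, 1, 2])
    print("MEC size:", size, "sampled DAG:", sorted(dag))
```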
We then update the posteriors over each graph cut, which quickly converge to the true cut configurations. This gives us a sample-efficient way to learn causal graphs through interventions non-parametrically for discrete variables.

Green at the bottom is our method; the others are baselines.
December 10, 2024 at 5:13 PM
Assuming we have enough observational data, we can compute the likelihood of our interventional samples given any graph cut, even though we don't know the graph. Two ways to do this: uniformly sampling causal graphs in polynomial time thanks to Wienöbst et al. (2023), or via the adjustment of Perkovic (2020). A toy version of this likelihood computation is sketched below.
December 10, 2024 at 5:13 PM
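A toy version of that likelihood computation, for two binary variables only (the numbers are made up, and this is not the paper's cut-posterior algorithm): under the hypothesis X -> Y, the observational P(Y | X) is also the interventional P(Y | do(X)); under Y -> X, do(X) leaves Y at its marginal. A Bayes update over these two hypotheses from do(X = 1) samples then looks like this.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ground truth: binary X -> Y.
p_x = 0.3                        # P(X = 1)
p_y_given_x = {0: 0.2, 1: 0.9}   # P(Y = 1 | X = x)

# "Enough" observational data: here we simply plug in the exact conditionals.
# Hypothesis A (X -> Y): P(Y | do(X = x)) = P(Y | X = x)
# Hypothesis B (Y -> X): P(Y | do(X = x)) = P(Y), since do(X) cuts the edge.
p_y = p_x * p_y_given_x[1] + (1 - p_x) * p_y_given_x[0]

log_post = np.log([0.5, 0.5])    # uniform prior over {A, B}

for _ in range(200):
    # One interventional sample: do(X = 1), then observe Y from the true model.
    y = rng.random() < p_y_given_x[1]
    lik_a = p_y_given_x[1] if y else 1 - p_y_given_x[1]
    lik_b = p_y if y else 1 - p_y
    log_post = log_post + np.log([lik_a, lik_b])
    log_post -= np.logaddexp.reduce(log_post)   # renormalize

print("posterior over {X->Y, Y->X}:", np.exp(log_post))
```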
We leverage an idea from the 2010s on learning causal graphs with small interventions: (n, k)-separating systems cut every edge using interventions of size k. Each intervention gives information about a cut, so we keep track of the posteriors over the set of graph cuts induced by the (n, k)-separating system (a construction sketch is below).
December 10, 2024 at 5:13 PM
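A minimal sketch of one standard (n, k)-separating-system construction (the base-a labelling idea, with oversized groups split to respect the size budget). The helper names are mine; this yields a valid but not necessarily minimal family, and is not necessarily the exact construction used in the paper.

```python
import math


def separating_system(n, k):
    """Return a family of intervention sets over vertices 0..n-1, each of size
    at most k, such that every pair (u, v) is separated by some set (one
    endpoint in, one out), i.e. every possible edge gets cut.

    Uses the base-a labelling idea behind (n, k)-separating systems; groups
    larger than k are split, so the family is valid but not minimal."""
    if n <= 1:
        return []
    a = max(2, math.ceil(n / k))      # alphabet size
    length = 1
    while a ** length < n:            # number of digit positions needed
        length += 1

    def digit(v, p):                  # p-th base-a digit of v
        return (v // a ** p) % a

    sets = []
    for p in range(length):
        for s in range(1, a):         # the zero symbol can be skipped safely
            group = [v for v in range(n) if digit(v, p) == s]
            for i in range(0, len(group), k):   # enforce size <= k
                chunk = group[i:i + k]
                if chunk:
                    sets.append(set(chunk))
    return sets


def is_separating(sets, n):
    """Check that every pair of vertices is separated by some set."""
    return all(
        any((u in s) != (v in s) for s in sets)
        for u in range(n) for v in range(u + 1, n)
    )


if __name__ == "__main__":
    S = separating_system(n=10, k=3)
    print(len(S), "interventions, largest of size", max(len(s) for s in S))
    print("separating:", is_separating(S, 10))
```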
Bayesian approaches are promising since they can incorporate causal knowledge even from a single interventional sample. However, they are computationally intensive to run on large graphs.

Instead of keeping track of all causal DAGs, can we keep track of a compact set of subgraphs?
December 10, 2024 at 5:13 PM
I will present this work at #NeurIPS2024 next Thursday at 11am local time in Vancouver. Poster #5104.

Led by my PhD student Qasim Elahi. Joint work with my colleague Mahsa Ghasemi.

Paper link:
openreview.net/forum?id=uM3...

Follow us for more updates from the #CausalML Lab!
Partial Structure Discovery is Sufficient for No-regret Learning in...
Causal knowledge about the relationships among decision variables and a reward variable in a bandit setting can accelerate the learning of an optimal decision. Current works often assume the causal...
openreview.net
December 8, 2024 at 7:36 PM
Finally, we have our bandit algorithm, which can operate in unknown environments by taking advantage of the fact that partial causal discovery is sufficient for achieving optimal regret. Pseudocode below; a simplified code sketch follows.
December 8, 2024 at 7:36 PM
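The actual pseudocode is in the attached figure. As a rough sketch of just the exploitation phase (standard UCB1 restricted to a given arm set; the POMIS arms and their reward means below are hypothetical, and the discovery phase that would produce them is assumed to have already run, so this is not the paper's full algorithm):

```python
import math
import random


def ucb_over_arms(arms, pull, horizon):
    """Standard UCB1 over a restricted arm set. In a causal bandit, `arms`
    would be the interventions corresponding to the POMISes recovered by the
    discovery phase, and pull(arm) performs do(arm) and returns a reward in
    [0, 1]. This is only the second half of the two-phase scheme."""
    counts = {a: 0 for a in arms}
    sums = {a: 0.0 for a in arms}
    for t in range(1, horizon + 1):
        if t <= len(arms):                     # pull each arm once first
            arm = arms[t - 1]
        else:                                  # then pick the highest UCB index
            arm = max(arms, key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        r = pull(arm)
        counts[arm] += 1
        sums[arm] += r
    best = max(arms, key=lambda a: sums[a] / max(counts[a], 1))
    return best, counts


if __name__ == "__main__":
    # Hypothetical POMIS arms with unknown Bernoulli reward means.
    means = {"do()": 0.4, "do(X=1)": 0.7, "do(Z=1)": 0.55}
    arms = list(means)
    best, counts = ucb_over_arms(
        arms, lambda a: float(random.random() < means[a]), horizon=2000)
    print("estimated best arm:", best, "pull counts:", counts)
```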
A toy example from the paper: missing V1 <--> V3 does not affect the possibly optimal minimal intervention sets (POMISes), while missing any other bidirected edge does. So we don't need to allocate rounds in our causal bandit algorithm to learning this edge after learning the rest.
December 8, 2024 at 7:36 PM
We find that not all confounder locations are needed. You can get away with not learning some and still end up with the same POMIS set, which means you will never miss an optimal arm!

We propose an interventional causal discovery algorithm that takes advantage of this observation.
December 8, 2024 at 7:36 PM
If you don't know the causal graph, you may try to learn it from data and/or experiments. We identify an interesting research question here:

Do we need to know all unobserved confounders to learn all POMISes? Or can we get away without knowing some?

This is not obvious.
December 8, 2024 at 7:36 PM
A very nice idea is the POMIS, developed by Sanghack Lee and Elias Bareinboim in 2018. They use do-calculus to eliminate unnecessary actions that give the same reward, and they propose a principled algorithm to do this. They also show that if you use fewer arms than the POMISes, you may miss the optimal arm.
December 8, 2024 at 7:36 PM
Most existing work focuses on how this action-space reduction can be done algorithmically if you know the causal structure. In a semi-Markovian model, this includes the location of every unobserved confounder, represented as a bidirected edge.
December 8, 2024 at 7:36 PM