Connor Lawless
lawlessopt.bsky.social
Stanford MS&E Postdoc | Human-Centered AI & OR
Prev: @CornellORIE @MSFTResearch, @IBMResearch, @uoftmie 🌈
Thank you!!
November 18, 2025 at 12:33 AM
📕: EquivaMap: Leveraging LLMs for Automatic Equivalence Checking of Optimization Formulations
(Joint work with @ellen-v.bsky.social @hzhai.bsky.social and @leqiliu.bsky.social )
🔗: arxiv.org/abs/2502.14760
July 16, 2025 at 6:51 PM
Our empirical results highlight that existing pointwise approaches for recourse can fail to catch potential fixed predictions, whereas our approach (provably) succeeds!
July 14, 2025 at 4:15 PM
We model the problem as a mixed-integer quadratically constrained program that runs in seconds on real-world datasets.
July 14, 2025 at 4:15 PM
This paradigm lets us spot fixed predictions before deploying a model, lets us audit public models for recourse (even if we don't have any available data!), and gives interpretable summaries of regions with fixed predictions to help with debugging.
July 14, 2025 at 4:14 PM
In this paper, we introduce a new paradigm for algorithmic recourse that aims to certify recourse over an entire region of the feature space!
July 14, 2025 at 4:13 PM
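The paper casts this as a mixed-integer program, but the core idea of certifying recourse over a whole region (rather than point by point) can be sketched with a brute-force check over a discretized region. Everything below (the toy linear classifier, the action set, the function names) is illustrative, not the paper's actual formulation:

```python
from itertools import product

def has_recourse(x, weights, bias, actions):
    """Check whether individual x can flip a linear classifier's
    negative prediction by applying one of the allowed actions."""
    return any(
        sum(w * (xi + a) for w, xi, a in zip(weights, x, act)) + bias >= 0
        for act in actions
    )

def certify_region(region, weights, bias, actions):
    """Exhaustively verify recourse for EVERY point in a discrete
    region of feature space; return the points with fixed predictions."""
    fixed = []
    for x in product(*region):
        score = sum(w * xi for w, xi in zip(weights, x)) + bias
        if score < 0 and not has_recourse(x, weights, bias, actions):
            fixed.append(x)
    return fixed

# Toy model: approve iff 2*income + 1*credit - 5 >= 0.
weights, bias = (2.0, 1.0), -5.0
# Region to audit: income in {0,1,2}, credit in {0,1,2}.
region = [(0.0, 1.0, 2.0), (0.0, 1.0, 2.0)]
# Only allowed action: raise credit score by at most 1.
actions = [(0.0, 0.0), (0.0, 1.0)]

fixed_points = certify_region(region, weights, bias, actions)
print(fixed_points)  # points in the region that can never be approved
```

A pointwise audit on a sample would only flag fixed predictions for individuals who happen to appear in the data; enumerating (or, in the paper, optimizing over) the region catches them all.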
Existing approaches to algorithmic recourse focus on verifying recourse on an individual-by-individual basis, which can cause model developers to miss potential fixed predictions, requires a lot of data, and makes it difficult to debug recourse issues!
July 14, 2025 at 4:12 PM
Machine learning models can assign fixed predictions that preclude individuals from changing their outcome. Think credit applicants that can never get a loan approved, or young patients that can never get an organ transplant - no matter how sick they are!
July 14, 2025 at 4:11 PM
This is my first time at an HCI conference - come say hi if you're around!
March 25, 2025 at 6:59 AM
In addition to a bunch of quantitative experiments, we ran a user study with a prototype system to inform design recommendations for future interactive optimization systems. Check out the paper for more details!
March 25, 2025 at 6:59 AM
We built a hybrid LLM and CP system that uses LLMs to translate user requests in chat into operations on an underlying CP optimization model to schedule a new meeting. This gets the best of both worlds - the flexibility of LLMs with the decision-making power of optimization!
March 25, 2025 at 6:58 AM
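The hybrid pattern - natural-language requests become operations on an optimization model, which is then re-solved - can be sketched in a few lines. Here the "LLM" is a trivial keyword stub and the "CP model" is an earliest-feasible-slot search; all names are illustrative, not the system's actual API:

```python
def translate(request):
    """Stand-in for the LLM: map a chat request to a model operation."""
    if "not before" in request:
        hour = int(request.split("not before ")[1].rstrip("."))
        return ("add_constraint", lambda slot: slot >= hour)
    if "not after" in request:
        hour = int(request.split("not after ")[1].rstrip("."))
        return ("add_constraint", lambda slot: slot <= hour)
    return ("noop", None)

class Scheduler:
    """Tiny stand-in for the CP model: pick the earliest hour
    in the workday that satisfies every accumulated constraint."""
    def __init__(self, hours=range(9, 18)):
        self.hours = list(hours)
        self.constraints = []

    def apply(self, op):
        kind, payload = op
        if kind == "add_constraint":
            self.constraints.append(payload)

    def solve(self):
        for h in self.hours:
            if all(c(h) for c in self.constraints):
                return h
        return None  # infeasible: no slot satisfies the requests

model = Scheduler()
model.apply(translate("Schedule it not before 13."))
model.apply(translate("And not after 15."))
print(model.solve())  # earliest hour satisfying both requests
```

The key design choice is the narrow interface: the LLM never produces a schedule itself, only structured edits to the model, so feasibility and optimality stay with the solver.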
Building optimization models in practice involves a ton of back and forth between optimization and domain experts to understand a decision-making problem. Can we enable domain experts to craft their own optimization models instead? We study this through the lens of scheduling.
March 25, 2025 at 6:57 AM
In case you're wondering why this thread looks suspiciously like a bunch of screenshots from a presentation...

I'll be chatting about this project at the INFORMS Computing Society Conference in the debate room at 3. Come say hi!
March 16, 2025 at 5:50 PM
More broadly, this is a first step towards a new paradigm where we can exploit natural language information to do better algorithm configuration and design! There's tons of exciting open problems towards this goal (reach out if you're interested!).
March 16, 2025 at 5:49 PM
Surprisingly, we can get high-performing configurations from our framework - outperforming solver defaults on a number of real-world problems, without solving a single MILP!
March 16, 2025 at 5:48 PM
We introduce an LLM-based framework with some algorithmic bells and whistles (ensembling, solver-specific context...) to capitalize on LLM strengths while addressing these challenges.
March 16, 2025 at 5:47 PM
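One of those bells and whistles, ensembling over stochastic LLM outputs, can be sketched as a per-parameter majority vote across repeated queries. The parameter names and the voting rule below are illustrative assumptions, not necessarily the framework's actual mechanism:

```python
from collections import Counter

def ensemble_configs(samples):
    """Combine several stochastic LLM-suggested parameter configurations
    by majority vote per parameter - one way to damp output variance."""
    merged = {}
    for param in samples[0]:
        votes = Counter(s[param] for s in samples)
        merged[param] = votes.most_common(1)[0][0]
    return merged

# Three hypothetical configurations from repeated LLM queries
# for (made-up) MILP solver parameters.
samples = [
    {"heuristics": 0.5, "cuts": 2, "presolve": 1},
    {"heuristics": 0.5, "cuts": 1, "presolve": 1},
    {"heuristics": 0.1, "cuts": 2, "presolve": 1},
]
config = ensemble_configs(samples)
print(config)  # majority value for each parameter
```

Because no MILP is ever solved during configuration, the only cost is the repeated LLM queries, which is the point of the "without solving a single MILP" claim above.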
Unfortunately, LLMs aren't a natural fit for configuration. Parameters are problem-specific, LLMs have stochastic outputs, and frankly - it's a tough problem!
March 16, 2025 at 5:46 PM
Can we get better problem-specific solver configurations without the big computational price tag?

In this paper we show that we can thanks to Large Language Models! Why LLMs? They can identify useful optimization structure and have a lot of built-in math programming knowledge!
March 16, 2025 at 5:44 PM