Arnab Sen Sharma
@arnabsensharma.bsky.social
PhD Student at Northeastern, working to make LLMs interpretable
Thanks to my collaborators Giordano Rogers, @natalieshapira.bsky.social, and @davidbau.bsky.social .

Check out our paper for more details:

📜 arxiv.org/pdf/2510.26784
💻 github.com/arnab-api/fi...
🌐 filter.baulab.info
November 4, 2025 at 5:56 PM
The fact that the neural mechanisms implemented in the transformer architecture align with human-designed symbolic strategies suggests that certain computational patterns arise naturally from task demands rather than from specific architectural constraints.
November 4, 2025 at 5:48 PM
This dual implementation of filtering (lazy evaluation via filter heads vs. eager evaluation by storing intermediate flags) echoes the lazy vs. eager evaluation strategies of functional programming.

See Henderson & Morris Jr. (1976): dl.acm.org/doi/abs/10....
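
To make the analogy concrete, here is a toy Python sketch (our own illustration, not code from the paper):

# Eager: evaluate the predicate on every item as soon as it appears,
# keeping a per-item flag (analogous to writing a flag into the latents).
items = ["apple", "carrot", "peach", "broccoli"]
fruit = {"apple", "peach"}
flags = [x in fruit for x in items]                # all predicates evaluated up front
eager_answer = [x for x, keep in zip(items, flags) if keep]

# Lazy: hold on to the predicate and only apply it when the answer is needed
# (analogous to filter heads applying a stored predicate at the final position).
lazy = filter(lambda x: x in fruit, items)         # nothing evaluated yet
lazy_answer = list(lazy)                           # evaluation forced here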
November 4, 2025 at 5:48 PM
This seemingly innocent change in the prompt order fundamentally changes which strategy the LLM uses. This suggests that LLMs can maintain multiple strategies for the same task, and flexibly switch between or prioritize them based on what information is available.
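
For concreteness, the two orderings look roughly like this (illustrative prompts, not the exact ones used in the paper):

# Question-after: the options arrive before the predicate is known,
# so the model can only filter lazily at the end.
question_after = (
    "Options: apple, carrot, peach, broccoli.\n"
    "Which of these is a fruit? Answer:"
)

# Question-before: the predicate is known up front, so each option can be
# checked (and flagged) eagerly as soon as it is read.
question_before = (
    "Which of these is a fruit?\n"
    "Options: apple, carrot, peach, broccoli. Answer:"
)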
November 4, 2025 at 5:48 PM
We validate this flag-based eager evaluation hypothesis with a series of carefully designed causal analyses. If we swap this flag onto another item, in the question-before setting the LM consistently picks the item carrying the flag. The question-after setting, however, is not sensitive to this swap.
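
Roughly, the swap looks like this (a hedged sketch; capture_residual and patch_residual are hypothetical helpers standing in for standard activation-patching code, and the layer range is a placeholder):

# Question-before prompt: the predicate is known before the options appear.
prompt = "Which of these is a fruit? chair, peach, table. Answer:"
layers = range(20, 40)                   # placeholder mid-layer range

# 1. Capture the residual-stream states at the correct option token ("peach"),
#    where the eager "flag" is hypothesised to be stored.
flag_states = capture_residual(model, prompt, token="peach", layers=layers)

# 2. Overwrite the states at a *different* option token ("chair") with the
#    captured flag states, then re-run the model.
answer = patch_residual(model, prompt, token="chair", values=flag_states, layers=layers)

# In the question-before setting the model now tends to answer "chair";
# in the question-after setting the same swap barely changes the answer.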
November 4, 2025 at 5:48 PM
🎭 Plot twist: when the question is presented *before* the options, the causality score drops to near zero!

We investigate this further and find that when the question is presented first, the LM can *eagerly* evaluate each option as it is seen, and store a "flag" directly in the latents.
November 4, 2025 at 5:48 PM
🔄 The predicate can also be transferred (to some extent) across different tasks, suggesting that LLMs rely on shared representations and mechanisms that are reused across tasks.

Also check out @jackmerullo.bsky.social's work on LLMs reusing sub-circuits across different tasks.
x.com/jack_merull...
November 4, 2025 at 5:48 PM
When the question is presented *after* the options, filter heads can achieve high causality scores across language and format changes! This suggests that the encoded predicate is robust against such perturbations.
November 4, 2025 at 5:48 PM
We test this across a range of different semantic types, presentation formats, languages, and even different tasks that require a different "reduce" step after filtering.
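
In functional terms, the tasks share the filter step and differ only in the reduce that follows (a toy Python illustration of this framing):

from functools import reduce

items = ["apple", "carrot", "peach", "broccoli"]
kept = [x for x in items if x in {"apple", "peach"}]        # shared filter step

pick_one = kept[0]                                          # "name one such item"
how_many = len(kept)                                        # "how many satisfy P?"
list_all = reduce(lambda a, b: a + ", " + b, kept)          # "list all of them"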
November 4, 2025 at 5:48 PM
📊 We measure this with a *causality* score: if the predicate is abstractly encoded in the query states of these "filter heads", then transferring it should change the output. For example, in the figure the answer should change to "Peach" (or its equivalent in the changed format).
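
In other words, causality is the fraction of transplant runs whose answer flips to the one implied by the transported predicate (a simplified sketch; answer_with_transplanted_queries is a hypothetical stand-in for running the target prompt with the source prompt's filter-head query states patched in):

def causality_score(examples):
    # Each example: (source prompt, target prompt, answer expected if the
    # source predicate is successfully transplanted, e.g. "Peach").
    hits = 0
    for src, tgt, expected in examples:
        if answer_with_transplanted_queries(src, tgt) == expected:
            hits += 1
    return hits / len(examples)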
November 4, 2025 at 5:48 PM
🤔 But do these heads play a *causal* role in the operation?

To test them, we transport their query states from one context to another. We find that this triggers the execution of the same filtering operation, even if the new context has a new list of items and a new format!
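
A minimal sketch of what transporting query states can look like on a Hugging Face Llama-style model; the model name, the (layer, head) indices, and the prompts below are placeholders, not the filter heads identified in the paper:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-3.1-8B"                 # placeholder model
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
FILTER_HEADS = [(20, 5), (22, 13)]                # placeholder (layer, head) pairs
head_dim = model.config.hidden_size // model.config.num_attention_heads

src = "apple, carrot, peach, broccoli. Which of these is a fruit? Answer:"  # source predicate
dst = "Options: (a) chair (b) mango (c) table. Pick one. Answer:"           # new list, new format

# 1. Run the source prompt and capture each filter head layer's query projections.
captured, handles = {}, []
for layer, _ in FILTER_HEADS:
    q_proj = model.model.layers[layer].self_attn.q_proj
    handles.append(q_proj.register_forward_hook(
        lambda m, i, o, layer=layer: captured.__setitem__(layer, o.detach().clone())))
with torch.no_grad():
    model(**tok(src, return_tensors="pt"))
for h in handles:
    h.remove()

# 2. Re-run on the destination prompt, overwriting the same heads' query
#    states at the final token position with the captured ones.
def make_patch(layer, head):
    sl = slice(head * head_dim, (head + 1) * head_dim)
    def hook(m, i, o):
        o = o.clone()
        o[:, -1, sl] = captured[layer][:, -1, sl]
        return o
    return hook

handles = [model.model.layers[l].self_attn.q_proj.register_forward_hook(make_patch(l, h))
           for l, h in FILTER_HEADS]
with torch.no_grad():
    logits = model(**tok(dst, return_tensors="pt")).logits
next_id = logits[0, -1].argmax().item()
print(tok.decode([next_id]))   # e.g. "mango", if the fruit predicate transfers
for h in handles:
    h.remove()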
November 4, 2025 at 5:48 PM
🔍 In Llama-70B and Gemma-27B, we found special attention heads that consistently focus their attention on the filtered items. This behavior seems consistent across a range of different formats and semantic types.
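
One simple way to surface such heads (a minimal sketch, assuming you already have a layer's attention weights and the token positions of the items that satisfy the predicate):

import torch

def mass_on_filtered_items(attn, item_positions, query_pos=-1):
    # attn: [n_heads, seq, seq] attention weights for one layer.
    # Returns, per head, the attention mass the final-position query
    # places on the filtered (predicate-satisfying) item tokens.
    return attn[:, query_pos, item_positions].sum(dim=-1)

# Heads whose mass stays high across prompts, formats, and semantic types
# are candidate filter heads.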
November 4, 2025 at 5:48 PM
We want to understand how large language models (LLMs) encode "predicates". Is every filtering question, e.g., find the X that satisfies property P, handled in a different way? Or has the LM learned to use abstract rules that can be reused in many different situations?
November 4, 2025 at 5:48 PM