Aditi Mavalankar
@aditimavalankar.bsky.social
Research Scientist at DeepMind working on Gemini Thinking
On my way to #ICML2025 to present our algorithm that strongly scales with inference compute, in both performance and sample diversity! 🚀

Reach out if you’d like to chat more!
July 13, 2025 at 12:26 PM
For the coding domain, a golden example pair, or AuPair, contains the problem description, an incorrect guess, and a fix that improves the solution.

Our submodular approach yields a fixed ordered set of complementary and useful AuPairs. For a budget of N LLM calls, the model is given N different prompts to answer the same question, where each prompt contains a different golden example.

Injecting different examples into the prompt has several benefits:

1) we see significant gains in performance compared to best-of-N and self-repair baselines on multiple model families: Gemini, Gemma, and GPT;

2) we observe strong generalisation across datasets and models, implying that the process of curating these examples can be performed once and the benefits in performance can be reaped multiple times;

3) our approach exhibits strong scaling with inference-time compute, and even after 100+ LLM calls, we do not see plateauing in the scaling curve;

4) the responses produced by the model have high diversity for the more performant models.
March 17, 2025 at 11:16 AM
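The prompting scheme described in the thread — inject a different golden example into each of N prompts for the same question, then keep the best response — could be sketched roughly as below. All names here (`AuPair`, `build_prompt`, `best_of_aupairs`) and the `llm`/`score` callables are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AuPair:
    """A golden example pair: a problem, an incorrect guess, and its fix."""
    problem: str
    bad_guess: str
    fix: str


def build_prompt(aupair: AuPair, question: str) -> str:
    """Inject one golden example ahead of the actual question."""
    return (
        "Example problem:\n" + aupair.problem + "\n"
        "Incorrect attempt:\n" + aupair.bad_guess + "\n"
        "Fixed solution:\n" + aupair.fix + "\n\n"
        "Now solve:\n" + question
    )


def best_of_aupairs(aupairs, question, llm, score):
    """Spend one LLM call per AuPair on the same question, each prompt
    carrying a different golden example, and keep the best-scoring response.
    `llm` and `score` are caller-supplied stand-ins for a model call and a
    response-quality metric (e.g. unit-test pass rate for code)."""
    responses = [llm(build_prompt(pair, question)) for pair in aupairs]
    return max(responses, key=score)
```

Since the curated AuPairs form a fixed ordered set, a budget of N calls would just take the first N pairs, e.g. `best_of_aupairs(aupairs[:N], question, llm, score)`.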