Kaustubh Sridhar
@kaustubhsridhar.bsky.social
Research Scientist at Google DeepMind.

Prev: @UPenn @Amazon @IITBombay

http://kaustubhsridhar.github.io/
Bsky doesn’t want you to see the awesome gifs! Find them on our website: bit.ly/regent-research
December 14, 2024 at 10:26 PM
This whole project would not have been possible without Souradeep Dutta, Dinesh Jayaraman, and Insup Lee.

We have many more results and ablations, along with the code, dataset, model, and paper, at our website: bit.ly/regent-research

The arxiv link: arxiv.org/abs/2412.04759
December 14, 2024 at 9:50 PM
REGENT is far from perfect.

It cannot generalize to new embodiments (unseen mujoco envs) or long-horizon envs (like spaceinvaders & stargunner). It also cannot generalize to completely new suites (i.e., it requires similarities between the pre-training and unseen envs).

A few failed rollouts:
December 14, 2024 at 9:50 PM
Here is a qualitative visualization of deploying REGENT in the unseen atari-pong environment.
December 14, 2024 at 9:50 PM
While REGENT’s design choices are aimed at generalization, its gains are not limited to unseen environments: it even performs better than current generalist agents when deployed within the pre-training environments.
December 14, 2024 at 9:50 PM
In the four unseen ProcGen environments, REGENT also outperforms MTT, the only other generalist agent that can generalize to unseen environments via in-context learning. REGENT does so with an order of magnitude less pretraining data and a third as many parameters.
December 14, 2024 at 9:50 PM
REGENT also outperforms the ‘All Data’ variants of JAT/Gato which were pre-trained on 5-10x the amount of data.

For context, the Multi-Game DT uses 1M states to finetune on new atari envs. REGENT generalizes via RAG from ~10k states. REGENT Finetuned further improves over REGENT.
December 14, 2024 at 9:50 PM
In the unseen metaworld & atari envs in the Gato setting, REGENT and R&P outperform SOTA generalist agents like JAT/Gato (the open-source reproduction of Gato). REGENT outperforms JAT/Gato even after the latter is finetuned on data from the unseen envs.
December 14, 2024 at 9:50 PM
We also evaluate on unseen levels and unseen environments in the ProcGen setting.
December 14, 2024 at 9:50 PM
We evaluate REGENT on unseen robotics and game environments in the Gato setting.
December 14, 2024 at 9:50 PM
REGENT has a few key ingredients, including an interpolation between R&P and the transformer. This allows the transformer to more readily generalize to unseen envs, since it is given the easier task of predicting the residual to the R&P action rather than the complete action.
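To make that concrete, here is a minimal sketch of such an interpolation, assuming a discrete action space; the exponential-in-distance weight and all names (transformer_logits, nn_distance, etc.) are illustrative assumptions, not REGENT's exact formulation:

```python
import numpy as np

def interpolated_policy(transformer_logits, rnp_action, nn_distance, num_actions):
    """Blend the R&P (1-nearest-neighbor) action with the transformer's
    prediction. Sketch only: the weighting scheme is an assumption."""
    # R&P's pick as a one-hot distribution over the discrete action space.
    rnp_dist = np.zeros(num_actions)
    rnp_dist[rnp_action] = 1.0

    # Transformer head as a softmax distribution (numerically stable).
    z = transformer_logits - np.max(transformer_logits)
    tf_dist = np.exp(z) / np.sum(np.exp(z))

    # Hypothetical weight: trust R&P more when the query state is very
    # close to its nearest retrieved state, the transformer otherwise.
    lam = np.exp(-nn_distance)

    # The transformer only has to model the residual around R&P's action.
    return lam * rnp_dist + (1.0 - lam) * tf_dist
```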
December 14, 2024 at 9:50 PM
R&P simply picks the nearest retrieved state s′ to the query state s_t and plays the corresponding action a′.

REGENT retrieves the 19 closest states, puts the corresponding (s, r, a) tuples into the context alongside the query (s_t, r_{t-1}), and acts via in-context learning in unseen envs.
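A minimal sketch of R&P, assuming plain L2 distance on raw states (the actual distance metric or state representation is an assumption on my part):

```python
import numpy as np

def retrieve_and_play(query_state, demo_states, demo_actions):
    """1-nearest-neighbor baseline: find the demo state closest to the
    query and play its action. demo_states is (N, state_dim); demo_actions
    holds the N corresponding actions."""
    dists = np.linalg.norm(demo_states - query_state, axis=1)
    return demo_actions[int(np.argmin(dists))]
```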
December 14, 2024 at 9:50 PM
Inspired by RAG and the success of a simple retrieval-based 1-nearest-neighbor baseline that we call Retrieve-and-Play (R&P),

REGENT pretrains a transformer policy whose inputs are not just the query state s_t and previous reward r_{t-1}, but also retrieved (state, previous reward, action) tuples.
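A rough sketch of how that retrieval-augmented input could be assembled; the k=19 default mirrors the number in the post above, and the L2 retrieval over raw demo states is an assumption:

```python
import numpy as np

def build_context(query_state, prev_reward, demo_states, demo_rewards,
                  demo_actions, k=19):
    """Assemble the policy's input: the k nearest retrieved
    (state, previous reward, action) tuples plus the query (s_t, r_{t-1})."""
    dists = np.linalg.norm(demo_states - query_state, axis=1)
    idx = np.argsort(dists)[:k]  # indices of the k closest demo states
    retrieved = [(demo_states[i], demo_rewards[i], demo_actions[i]) for i in idx]
    # The query tuple has no action -- predicting it is the policy's job.
    return retrieved, (query_state, prev_reward)
```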
December 14, 2024 at 9:50 PM
REGENT is pretrained on data from many training envs (left). REGENT is then deployed on the held-out envs (right) with a few demos from which it can retrieve states, rewards, and actions to use for in-context learning. **It never finetunes on the demos in the held-out envs.**
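A deployment-loop sketch under the same assumptions (a gym-style env API, the build_context helper from the sketch above, and a frozen pretrained policy with an act method; all names are illustrative):

```python
def deploy(env, policy, demo_states, demo_rewards, demo_actions, max_steps=1000):
    """Roll out the frozen pretrained policy in a held-out env: every step
    retrieves from the few demos and acts in-context -- no gradient updates."""
    state, prev_reward = env.reset(), 0.0
    for _ in range(max_steps):
        retrieved, query = build_context(state, prev_reward, demo_states,
                                         demo_rewards, demo_actions)
        action = policy.act(retrieved, query)  # forward pass only, never finetuned
        state, prev_reward, done, _ = env.step(action)  # classic gym 4-tuple
        if done:
            break
```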
December 14, 2024 at 9:50 PM
Bluesky doesn't want you to see these gifs! :) Please see the rollouts in unseen environments on our website: bit.ly/regent-research
REGENT: A Retrieval-Augmented Generalist Agent That Can Act In-Context In New Environments.
December 14, 2024 at 7:43 PM
We are also presenting REGENT at the Adaptive Foundation Models (this afternoon, Saturday Dec 14) and Open World Agents (tomorrow afternoon, Sunday Dec 15) workshops at NeurIPS. Please come by if you’d like to hear more!
December 14, 2024 at 7:39 PM
What would Deep Thought cost for the ultimate question? bsky.app/profile/nato...
ChatGPT Pro / o1 pro mode at $200 per month has got to be a step change in usefulness above Claude Sonnet to be worth it. I'm doubtful, but hey, if they pull it off, I'll pay.
December 5, 2024 at 5:27 PM