Laura
@lauraruis.bsky.social
PhD supervised by Tim Rocktäschel and Ed Grefenstette, part-time at Cohere. Language and LLMs. Spent time at FAIR, Google, and NYU (with Brenden Lake). She/her.
Reposted by Laura
I really enjoyed my MLST chat with Tim @neuripsconf.bsky.social about the research we've been doing on reasoning, robustness and human feedback. If you have an hour to spare and are interested in AI robustness, it may be worth a listen 🎧
Check it out at youtu.be/DL7qwmWWk88?...
March 19, 2025 at 3:11 PM
"Rather than being animals that *think*, we are *animals* that think"; the last sentence of Tom Griffiths's characterisation of human intelligence through limited time, compute, and communication hits different today than it did 4 years ago.
December 22, 2024 at 11:04 AM
"Rather than being animals that *think*, we are *animals* that think"; the last sentence of Tom Griffiths's characterisation of human intelligence through limited time, compute, and communication hits different today than it did 4 years ago.
Sometimes o1's thinking time almost feels like a slight. o1 is like "oh, I thought about this uninvolved question of yours for 7 seconds and here is my 20-page essay on it"
December 15, 2024 at 5:38 PM
Reposted by Laura
What counts as in-context learning (ICL)? Typically, you might think of it as learning a task from a few examples. However, we’ve just written a perspective (arxiv.org/abs/2412.03782) suggesting interpreting a much broader spectrum of behaviors as ICL! Quick summary thread: 1/7
The broader spectrum of in-context learning
The ability of language models to learn a task from a few examples in context has generated substantial interest. Here, we provide a perspective that situates this type of supervised few-shot learning...
arxiv.org
December 10, 2024 at 6:17 PM
Reposted by Laura
Big congratulations to Dr. @jumelet.bsky.social for obtaining his PhD today and crafting a beautiful thesis full of original and insightful work!! 🎉 arxiv.org/pdf/2411.16433?
December 10, 2024 at 3:07 PM
I'll be at NeurIPS tues-sun, send me a message if you'd like to chat!
December 8, 2024 at 4:51 PM
Reposted by Laura
This is an incredible paper that I've wanted to do for a long time. However, the engineering challenges were far too daunting, so my collaborators and I settled for indirect evidence for this hypothesis instead (or did other things).
How do LLMs learn to reason from data? Are they ~retrieving the answers from parametric knowledge🦜? In our new preprint, we look at the pretraining data and find evidence against this:
Procedural knowledge in pretraining drives LLM reasoning ⚙️🔢
🧵⬇️
November 30, 2024 at 5:10 PM
Do you know what rating you’ll give after reading the intro? Are your confidence scores 4 or higher? Do you not respond in rebuttal phases? Are you worried how it will look if your rating is the only 8 among 3’s? This thread is for you.
November 27, 2024 at 5:25 PM
How do LLMs learn to reason from data? Are they ~retrieving the answers from parametric knowledge🦜? In our new preprint, we look at the pretraining data and find evidence against this:
Procedural knowledge in pretraining drives LLM reasoning ⚙️🔢
🧵⬇️
November 20, 2024 at 4:35 PM