Lisa Alazraki
@lisaalaz.bsky.social
PhD student @ImperialCollege. Research Scientist Intern @Meta prev. @Cohere, @GoogleAI. Interested in generalisable learning and reasoning. She/her

lisaalaz.github.io
We have released #AgentCoMa, an agentic reasoning benchmark where each task requires a mix of commonsense and math to be solved 🧐

LLM agents performing real-world tasks should be able to combine these different types of reasoning, but are they fit for the job? 🤔

🧵⬇️
August 28, 2025 at 2:01 PM
Thrilled to share our new preprint on Reinforcement Learning for Reverse Engineering (RLRE) 🚀

We demonstrate that human preferences can be reverse engineered effectively by pipelining LLMs to optimise upstream preambles via reinforcement learning 🧵⬇️
May 22, 2025 at 3:01 PM
I’ll be presenting Meta-Reasoning Improves Tool Use in Large Language Models at #NAACL25 tomorrow, Thursday May 1st, from 2 until 3:30pm in Hall 3! Come check it out and have a friendly chat if you’re interested in LLM reasoning and tools 🙂 #NAACL
April 30, 2025 at 8:58 PM
Reposted by Lisa Alazraki
Excited to share our ICLR and NAACL papers! Please come and say hi, we're super friendly :)
April 22, 2025 at 6:42 PM
New work led by @mercyxu.bsky.social
Check out the poster presentation on Sunday 27th April in Singapore!
Slightly lazy, but I feel the need to post this in case it is too late... We will present this at the ICLR Workshop on Sparsity in LLMs (SLLM)! We found that the representation dimension can dominate model performance in structured pruning 🤯
#ICLR2025 #LLM #sparsity
April 12, 2025 at 10:47 AM
Reposted by Lisa Alazraki
I really enjoyed my MLST chat with Tim @neuripsconf.bsky.social about the research we've been doing on reasoning, robustness and human feedback. If you have an hour to spare and are interested in AI robustness, it may be worth a listen 🎧

Check it out at youtu.be/DL7qwmWWk88?...
March 19, 2025 at 3:11 PM
Reposted by Lisa Alazraki
ACL Rolling Review and the EMNLP PCs are seeking input on the current state of reviewing for *CL conferences. We would love to get your feedback on the current process and how it could be improved. To contribute your ideas and opinions, please follow this link! forms.office.com/r/P68uvwXYqfemn
February 27, 2025 at 5:00 PM
Do LLMs need rationales for learning from mistakes? 🤔
When LLMs learn from previous incorrect answers, they typically observe corrective feedback in the form of rationales explaining each mistake. In our new preprint, we find that these rationales do not help; in fact, they hurt performance!

🧵
February 13, 2025 at 3:38 PM
Reposted by Lisa Alazraki
Announcing the release of Common Corpus 2. The largest fully open corpus for pretraining comes back better than ever: 2 trillion tokens with document-level licensing, provenance and language information. huggingface.co/datasets/Ple...
February 11, 2025 at 1:18 PM
Reposted by Lisa Alazraki
I noticed a lot of starter packs skewed towards faculty/industry, so I made one of just NLP & ML students: go.bsky.app/vju2ux

Students do different research, go on the job market, and recruit other students. Ping me and I'll add you!
November 23, 2024 at 7:54 PM
Reposted by Lisa Alazraki
We are hiring 6 lecturers at @imperialcollegeldn.bsky.social to work on AI, ML, graphics, vision, quantum and software engineering. This includes researchers working on LLMs, NLP, generative models and text applications. Deadline 6 Jan. @imperial-nlp.bsky.social www.imperial.ac.uk/jobs/search-...
December 24, 2024 at 12:23 AM
Reposted by Lisa Alazraki
I'm excited to share a new paper: "Mastering Board Games by External and Internal Planning with Language Models"

storage.googleapis.com/deepmind-med...

(also soon to be up on Arxiv, once it's been processed there)
December 5, 2024 at 7:49 AM
Reposted by Lisa Alazraki
We’re looking for an intern to join our SmolLM team! If you’re excited about training LLMs and building high-quality datasets, we’d love to hear from you. 🤗

US: apply.workable.com/huggingface/...
EMEA: apply.workable.com/huggingface/...
November 27, 2024 at 10:20 AM
Spot-on thread that we should all read (and hope our reviewers will read)
Do you know what rating you’ll give after reading the intro? Are your confidence scores 4 or higher? Do you not respond in rebuttal phases? Are you worried how it will look if your rating is the only 8 among 3’s? This thread is for you.
November 27, 2024 at 6:13 PM
Reposted by Lisa Alazraki
I thought to create a Starter Pack for people working on LLM Agents. Please feel free to self-refer as well.

go.bsky.app/LUrLWXe

#LLMAgents #LLMReasoning
November 20, 2024 at 2:08 PM
Reposted by Lisa Alazraki
I recently wrote down 68 short summaries of ML and NLP papers. The aim is to give a quick overview of the core research contributions, without all the packaging. www.marekrei.com/blog/68-summ...
November 16, 2024 at 12:34 AM
Hi Bluesky, I’d like to introduce myself 🙂

I am PhD-ing at Imperial College under @marekrei.bsky.social’s supervision. I am broadly interested in LLM/LVLM reasoning & planning 🤖 (here’s our latest work arxiv.org/abs/2411.04535)

Do reach out if you are interested in these (or related) topics!
Meta-Reasoning Improves Tool Use in Large Language Models
November 20, 2024 at 11:26 AM
Reposted by Lisa Alazraki
Welcome to Bluesky to more of our NLP researchers at Imperial!! Looking forward to following everyone's work on here.

To follow us all click 'follow all' in the starter pack below

go.bsky.app/Bv5thAb
November 20, 2024 at 8:35 AM
Reposted by Lisa Alazraki
Any kind folks from #nlp #mlsky willing to help with emergency reviews for ARR Oct 🥹? Pls send me a msg 🙏
November 19, 2024 at 11:07 PM
Reposted by Lisa Alazraki
The NLP labs starter pack is here! go.bsky.app/LKGekew Let us know if you want to be added!
November 13, 2024 at 7:20 AM
Reposted by Lisa Alazraki
get it while it’s hot #ai

go.bsky.app/Pxcnfu6
November 18, 2024 at 1:36 PM
Thank you for the repost @imperial-nlp.bsky.social 🙂
Great paper Lisa! This new work on tool selection first uses a fine-tuned LLM to find good candidate tools; an LLM without fine-tuning then reasons over these candidates, yielding significant performance improvements.
Lisa Alazraki, Marek Rei
Meta-Reasoning Improves Tool Use in Large Language Models
https://arxiv.org/abs/2411.04535
November 18, 2024 at 4:17 PM
Reposted by Lisa Alazraki
If you're an NLP researcher and haven't made it into either Starter Pack yet, please let me know! We're over halfway full at this point 😧

go.bsky.app/JgneRQk
November 18, 2024 at 7:45 AM
Reposted by Lisa Alazraki
A starter pack for #NLP #NLProc researchers! 🎉

go.bsky.app/SngwGeS
November 4, 2024 at 10:01 AM