Syeda Nahida Akter
@reasyaay.bsky.social
PhD-ing @ LTI, CMU; Intern @ NVIDIA. Doing Reasoning with Gen AI!
Huge thanks to our incredible collaborators: @shrimai.bsky.social, Matvei Novikov, Seungju Han, Ying Lin, Evelina Bakhturina, Eric Nyberg, @yejinchoinka.bsky.social, Mostofa Patwary, Mohammad Shoeybi, Bryan Catanzaro 🙌

We’d love to hear your thoughts—feedback and ideas are always welcome! 💬
May 1, 2025 at 5:42 PM
Find out more about our data and paper 👇
📂 Dataset on HuggingFace:
huggingface.co/datasets/nvi...
📝 Blog:
research.nvidia.com/labs/adlr/Ne...
🔗 Paper: arxiv.org/abs/2504.13941
🧠 Selective difficulty > data volume
✅ Filtering out easy samples (i.e., those already solved by a 7B model) yields a +2.15% accuracy gain when training a 32B model (filter sketched below).
✅ Harder questions push the model to learn deeper reasoning patterns.
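A minimal sketch of what this difficulty filter could look like (hypothetical helper names like `reference_model.generate` and `check_answer`; not the paper's actual pipeline):

```python
# Hypothetical sketch of difficulty filtering: drop samples that a smaller
# "reference" model (e.g., a 7B LLM) already solves, keeping only the harder
# questions for RL training of the larger model.
def filter_by_difficulty(samples, reference_model, check_answer, n_attempts=4):
    hard_samples = []
    for sample in samples:
        # Give the smaller model a few tries; if any attempt is correct,
        # the sample is considered "easy" and discarded.
        solved = any(
            check_answer(reference_model.generate(sample["question"]),
                         sample["answer"])
            for _ in range(n_attempts)
        )
        if not solved:  # keep only questions the smaller model gets wrong
            hard_samples.append(sample)
    return hard_samples
```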
💡 Better formatting → Stronger reasoning

➣ Open-ended questions boost accuracy (+1.21%) by forcing models to reason, not guess!
➣ Short-form answers reduce ambiguity & avoid noisy rewards, boosting accuracy by +1.20%!

👉 Thoughtful templates = clearer supervision, better RL (illustrative templates below)
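As an illustration, such templates might look roughly like this (invented field names and wording; not the released Nemotron-CrossThink templates):

```python
# Illustrative prompt templates only; the actual Nemotron-CrossThink templates
# are not reproduced here.
MCQ_TEMPLATE = (
    "Question: {question}\n"
    "Options: {options}\n"
    "Answer with the letter of the correct option."
)
OPEN_ENDED_TEMPLATE = (
    "Question: {question}\n"
    "Reason step by step, then give a short final answer after 'Answer:'."
)

def format_sample(sample, open_ended=True):
    """Render a QA pair with either an open-ended or multiple-choice template."""
    if open_ended:
        return OPEN_ENDED_TEMPLATE.format(question=sample["question"])
    return MCQ_TEMPLATE.format(question=sample["question"],
                               options=", ".join(sample["options"]))
```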
🔥 Nemotron-CrossThink achieves 28% better token efficiency (fewer tokens per correct answer) by adapting response length to the task:

➣ concise on general-purpose reasoning (229 tokens on MMLU)
➣ detailed on math (a +62% increase in tokens)

Math-only models, by contrast, barely adapt (only a 12–14% token increase).
🎯 Why it matters:
Nemotron-CrossThink achieves:
📈 +30.1% on MATH-500, +15.1% on AGIEval, +12.8% on MMLU-Pro compared to the base LLM
📉 28% fewer tokens per correct answer
🏆 Outperforms math-only blends by training on broader, more diverse reasoning data
How does Nemotron-CrossThink work?
➣ Curate QA pairs from Common Crawl + open datasets
➣ Apply structured templates: multiple-choice + open-ended
➣ Filter out unverifiable / ambiguous samples
➣ Train the LLM with GRPO, a scalable RL algorithm (group-relative scoring sketched below)
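For context, the heart of GRPO is scoring each sampled response relative to the other responses drawn for the same prompt, so no learned value model is needed. A tiny self-contained illustration of that group-relative scoring (not the paper's implementation):

```python
import statistics

def group_relative_advantages(rewards):
    """GRPO's core idea: each sampled response to a prompt is scored relative
    to the other responses in its group (reward minus group mean, divided by
    the group standard deviation)."""
    mean_r = statistics.mean(rewards)
    std_r = statistics.pstdev(rewards) or 1.0  # guard against zero variance
    return [(r - mean_r) / std_r for r in rewards]

# Example: 4 sampled answers to one question, rewarded 1.0 if verifiably
# correct and 0.0 otherwise; the correct answers get positive advantages.
print(group_relative_advantages([1.0, 0.0, 0.0, 1.0]))  # [1.0, -1.0, -1.0, 1.0]
```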
Most RL methods stick to math because rewards are easy to define.
But general-purpose reasoning?
❌ No clean answers
❌ No fixed rules
Nemotron-CrossThink addresses these by:
✅ Designing verifiable rewards for diverse tasks (toy reward sketched below)
✅ Blending structured data from STEM, law, humanities, & more
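One way to picture a verifiable reward beyond math: a simple rule-based check on the model's short-form final answer (a minimal sketch, assuming the prompt template asks the model to end with "Answer: <short answer>"; not the paper's reward code):

```python
import re

def verifiable_reward(model_output: str, gold_answer: str) -> float:
    """Rule-based reward: 1.0 if the model's short final answer matches the
    reference after light normalization, else 0.0.
    Hypothetical sketch; the paper's actual reward design may differ."""
    match = re.search(r"Answer:\s*(.+)", model_output, flags=re.IGNORECASE)
    if not match:
        return 0.0
    predicted = match.group(1).strip().lower().rstrip(".")
    gold = gold_answer.strip().lower().rstrip(".")
    return 1.0 if predicted == gold else 0.0

# The same check works for MCQ letters, short entities, or numeric answers,
# which is what makes short-form answers easy to verify across domains.
print(verifiable_reward("...reasoning... Answer: B", "B"))        # 1.0
print(verifiable_reward("...reasoning... Answer: Paris", "Rome")) # 0.0
```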