Oral at regulatableml.github.io & Poster at redteaming-gen-ai.github.io
TLDR: We benchmarked LLMs' literal/non-literal copying of copyrighted content—risks found even in 8B models.
Details: www.arxiv.org/abs/2407.07087
TLDR: We demonstrated that scaling the retrieval corpora of Retrieval-Augmented LMs to 1.4T tokens helps & achieves more compute-optimal scaling
Details: retrievalscaling.github.io
Retrieval-Augmented LMs tackle critical challenges like:
1️⃣ Unreliable LMs in expert domains
2️⃣ Information access inequity across languages
I launched OpenScholar for scientific synthesis—20k+ demo requests in week 1! Details: allenai.org/blog/opensch...
Retrieval-augmented LMs need more than off-the-shelf models. I developed advanced training/inference algorithms & architectures, including Self-RAG (ICLR 2024 Oral; NeurIPS Workshop Hon. Mention) for adaptive retrieval & self-critique.
Learn more:
selfrag.github.io
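Curious what "adaptive retrieval & self-critique" looks like at inference time? Here's a minimal Python sketch of a Self-RAG-style loop; the `lm`/`retriever` methods and the scoring are illustrative stand-ins under my own naming, not the released Self-RAG code (see selfrag.github.io for that):

```python
from dataclasses import dataclass

# Sketch of Self-RAG-style inference: adaptive retrieval + self-critique.
# All method names here are hypothetical stand-ins, not the real API.

@dataclass
class Candidate:
    text: str
    score: float

def self_rag_answer(lm, retriever, query: str, k: int = 5) -> str:
    # 1. Adaptive retrieval: the LM itself decides whether evidence is
    #    needed (Self-RAG signals this with a special [Retrieve] token).
    passages = retriever.search(query, k=k) if lm.needs_retrieval(query) else [None]

    candidates = []
    for passage in passages:
        draft = lm.generate(query, context=passage)
        # 2. Self-critique: reflection tokens grade passage relevance,
        #    whether the draft is supported by it, and overall usefulness.
        score = sum(lm.critique(draft, passage, aspect=a)
                    for a in ("relevance", "support", "usefulness"))
        candidates.append(Candidate(draft, score))

    # 3. Keep the best self-scored candidate.
    return max(candidates, key=lambda c: c.score).text
```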
My work showed that scaling LLMs alone doesn’t solve issues like hallucinations or obsolete knowledge and is compute-suboptimal; Retrieval-Augmented LMs address these challenges. See our ACL 2023 Best Video Award paper:
aclanthology.org/2023.acl-lon...
OpenScholar is the result of a collaborative effort between UW, Ai2, and many others!
Huge thanks to our incredible team, including experts from CS, biology, and physics, for making this possible!
We’d love your feedback! Reply or email us with questions, ideas, or use cases✨
Try it out: openscholar.allen.ai
Read more: allenai.org/blog/opensch... – we discuss more details as well as limitations of OpenScholar, based on our beta testing with CS researchers!
Code & data: github.com/AkariAsai/Op...
Paper: openscholar.allen.ai/paper
We're just getting started with OpenScholar! 🚀
Expanding domains: Support for non-CS fields is coming soon.
Public API: Full-text search over 45M+ papers will be available shortly.
Try the OpenScholar demo and share your feedback!
openscholar.allen.ai
📂 OpenScholar Datastore (45M+ papers up to 2024/10; loading sketch after this list): huggingface.co/datasets/Ope...
📊 ScholarQABench: github.com/AkariAsai/Sc...
👩🔬 Human evaluation interface: github.com/AkariAsai/Op...
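Want to poke at the datastore programmatically? A minimal 🤗 datasets sketch, streaming so you don't pull 45M+ papers at once (the dataset ID and field names below are placeholders, since the path above is truncated; grab the exact ID from the link):

```python
from datasets import load_dataset

# Placeholder dataset ID: substitute the exact Hugging Face path from the
# link above. Streaming avoids downloading the full 45M+ paper corpus.
ds = load_dataset("OpenScholar/<datastore-id>", split="train", streaming=True)

# Peek at a few records to see what fields are available.
for i, record in enumerate(ds):
    print(record.keys())  # field names depend on the actual dataset
    if i == 2:
        break
```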
Prior work in this area has relied on proprietary LMs and/or released only a subset of the datastore.
We're releasing:
Demo: openscholar.allen.ai
🔓 Code & model checkpoints:
github.com/AkariAsai/Op...
huggingface.co/collections/...
We further conduct expert evaluations with scientists across CS, biology, and physics, comparing OpenScholar against expert answers.
Scientists preferred OpenScholar-8B outputs over human-written answers the majority of the time, thanks to their coverage.
So how good is OpenScholar?
On ScholarQABench, OpenScholar-8B surpassed GPT-4o, the concurrent PaperQA2, and other models in factuality & citation accuracy despite being many times cheaper!
ScholarQABench: a benchmark for evaluating scientific language models on real-world, open-ended questions requiring synthesis across multiple papers. 🌟
📚 7 datasets across four scientific disciplines
🧑🔬 2,000+ expert-annotated questions and 200 answers
📊 Automated metrics (see the sketch after this list)
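For a flavor of what an automated metric here can look like, a hedged sketch of citation precision over an answer's claims; `is_supported` is a hypothetical stand-in for an entailment judge (e.g., an NLI model), not the actual ScholarQABench code:

```python
# Hypothetical sketch: citation precision = fraction of claims in an answer
# that are actually supported by at least one of the passages they cite.

def citation_precision(claims_with_citations, is_supported) -> float:
    """claims_with_citations: list of (claim_text, [cited_passage, ...])."""
    if not claims_with_citations:
        return 0.0
    supported = 0
    for claim, cited in claims_with_citations:
        # A claim counts as correctly cited if any cited passage supports it.
        if any(is_supported(claim, passage) for passage in cited):
            supported += 1
    return supported / len(claims_with_citations)
```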