https://lakshyaaagrawal.github.io
Maintainer of https://aka.ms/multilspy
"GEPA: Reflective Prompt Evolution Can Outperform Reinforcement Learning" presents such elegant ideas by a collection of amazing researchers!
Here is a tldr of how it works:
They used Qwen3-8B (which was not specially trained for math, coding, agentic tasks, etc.) and show that GEPA outperformed RL while using far fewer rollouts
paper: arxiv.org/abs/2507.19457
github: github.com/gepa-ai/gepa
DSPy docs: dspy.ai/api/optimize...
I'm learning to use DSPy with GEPA (Genetic-Pareto) prompt optimization. In GEPA a larger "teacher" LLM adjusts the prompt for a smaller "student" LM to perform a specific task as well as possible. The teacher will try many different prompts and evaluate the […]
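The teacher-reflects-then-student-retries loop described above can be sketched in plain Python. This is an illustrative toy, not DSPy's actual API: `student_answer`, `teacher_reflect`, and `gepa_step` are hypothetical stand-ins for the LLM calls that `dspy.GEPA` would make for you.

```python
# Minimal sketch of a GEPA-style reflective prompt-mutation loop.
# The "teacher" and "student" are stub functions standing in for LLM
# calls; every name here is illustrative, not part of DSPy's API.

def student_answer(prompt: str, question: str) -> str:
    # Stub student LM: answers tersely only if the prompt demands it.
    if "digits only" in prompt:
        return "4"
    return "The answer is 4."

def score(answer: str, gold: str) -> float:
    # Task metric: exact match against the gold label.
    return 1.0 if answer.strip() == gold else 0.0

def teacher_reflect(prompt: str, failures: list[str]) -> str:
    # Stub teacher LLM: reads failure traces written in natural
    # language and proposes a revised prompt. A real teacher model
    # would generate this rewrite from the feedback text.
    if any("Expected a bare number" in f for f in failures):
        return prompt + " Respond with digits only."
    return prompt

def gepa_step(prompt, trainset):
    """One evolve step: run the student, collect textual feedback on
    failures, let the teacher rewrite the prompt, and keep the rewrite
    if it scores at least as well on the training minibatch."""
    failures, base = [], 0.0
    for question, gold in trainset:
        ans = student_answer(prompt, question)
        s = score(ans, gold)
        base += s
        if s == 0.0:
            failures.append(
                f"Q: {question} -> {ans!r}. Expected a bare number like {gold!r}."
            )
    candidate = teacher_reflect(prompt, failures)
    cand = sum(score(student_answer(candidate, q), g) for q, g in trainset)
    return (candidate, cand) if cand >= base else (prompt, base)

trainset = [("What is 2 + 2?", "4")]
prompt, s = gepa_step("Answer the question.", trainset)
print(prompt)  # prompt now also asks for digits only
print(s)       # 1.0
```

The point of the sketch: the optimization signal is a *sentence* about what went wrong, not just a scalar reward, which is what lets each rollout teach the teacher so much.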
arxiv.org/abs/2507.19457
Given any AI system containing one or more LLM prompts, GEPA samples system-level trajectories (e.g., reasoning, tool calls, and tool outputs) and reflects on them in natural language to diagnose problems,
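The "Genetic-Pareto" part of the name refers to how candidate prompts survive between generations: rather than keeping only the best-on-average prompt, GEPA keeps every candidate that achieves the best score on at least one task instance. A minimal sketch of that selection rule (the candidate names and scores are made up for illustration):

```python
# Pareto-style candidate selection: keep any prompt candidate that is
# best on at least one training instance, instead of keeping only the
# candidate with the best average. This preserves diverse "winning
# lessons" that a single averaged leader would discard.

def pareto_candidates(scores: dict[str, list[float]]) -> set[str]:
    """scores maps candidate name -> per-instance scores (same order)."""
    n = len(next(iter(scores.values())))
    keep = set()
    for i in range(n):
        best = max(s[i] for s in scores.values())
        for name, s in scores.items():
            if s[i] == best:
                keep.add(name)
    return keep

scores = {
    "prompt_a": [1.0, 0.0, 0.0],  # wins instance 0
    "prompt_b": [0.0, 1.0, 1.0],  # wins instances 1 and 2
    "prompt_c": [0.5, 0.5, 0.5],  # best on nothing -> dropped
}
print(sorted(pareto_candidates(scores)))  # ['prompt_a', 'prompt_b']
```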
Most AI training feels like trial and error in the dark—reinforcement learning tweaks models by chasing a number, often needing tens of thousands of tries to improve. But what if the AI could actually *talk to itself* about..
(1/7)
(2/7)
GEPA treats AI prompt tuning like a conversation with itself, iterating through generations of prompts that learn from detailed feedback written in words, not just numbers. This lets it learn much more efficiently—up to 35 times..
(3/7)
(4/7)
What’s cool here is the shift from treating AI tuning as a blind search for a higher score to a reflective process that leverages the AI’s native strength: language. By evolving prompts through thoughtful reflections, GEPA unlocks smarter, faster learning that could..
(5/7)
Links:
Paper on arXiv: https://arxiv.org/abs/2507.19457 ..
(6/7)
venturebeat.com/ai/the-usd10...
📜 MIT license