➡️ When there's an issue in prod, engineers dive into recent code changes to find the offending commit. At Meta, with thousands of changes landing every day, this is like finding a needle in a haystack.
🔄 Two-step approach:
‣ Heuristics (code ownership, directory structure, runtime graphs) reduce thousands of potential changes to a manageable set
‣ Fine-tuned Llama 2 7B ranks the most likely culprits
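Roughly, the two steps fit together like the sketch below (a minimal Python illustration; the field names, signals, and the `llm_rank` call are my own stand-ins, not Meta's actual code):

```python
# Hypothetical sketch of the two-step approach: cheap heuristics prune the
# candidate changes, then a fine-tuned LLM ranks whatever survives.
from dataclasses import dataclass

@dataclass
class CodeChange:
    diff_id: str
    owner_team: str        # code-ownership signal
    touched_dirs: set      # directory-structure signal
    on_runtime_path: bool  # runtime-graph signal

@dataclass
class Incident:
    summary: str
    affected_teams: set
    affected_dirs: set

def heuristic_filter(changes, incident):
    """Step 1: reduce thousands of changes to a manageable candidate set."""
    return [
        c for c in changes
        if c.on_runtime_path
        and (c.owner_team in incident.affected_teams
             or c.touched_dirs & incident.affected_dirs)
    ]

def rank_culprits(changes, incident, llm_rank):
    """Step 2: the fine-tuned model orders the survivors by likelihood."""
    candidates = heuristic_filter(changes, incident)
    # `llm_rank` is a placeholder for the actual model call; in Meta's setup
    # this would be the fine-tuned Llama 2 7B scoring each candidate diff
    # against the incident summary.
    return llm_rank(incident.summary, [c.diff_id for c in candidates])
```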
🦙 How they trained the ranking model:
‣ Continued pre-training on Meta's internal docs and wikis
‣ Supervised fine-tuning on past incident investigations
‣ Training data mimicked real-world constraints (2-20 potential changes per incident)
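For intuition, a single fine-tuning example presumably pairs one incident with a handful of candidate changes and the known culprit. A made-up record (not Meta's actual data format) might look like:

```python
# Illustrative (invented) shape of one supervised fine-tuning example:
# an incident report, a few candidate changes, and the change that caused it.
sft_example = {
    "prompt": (
        "Incident: error rate spike in the checkout service after 14:00 UTC.\n"
        "Candidate changes:\n"
        "1. D123: bump payment client timeout\n"
        "2. D124: refactor cart serialization\n"
        "3. D125: enable new retry policy\n"
        "Which change most likely caused the incident?"
    ),
    "completion": "3. D125: enable new retry policy",
}
```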
💡 The result: these LLM-based suggestions can cut incident resolution time from hours to seconds!
Read it in full 👉 www.tryparity.com/blog/how-met...