Julia Kreutzer
@juliakreutzer.bsky.social
NLP & ML research @cohereforai.bsky.social 🇨🇦
Thank you @rapha.dev 😊 hope we can make it standard to go a little deeper with evals, rather than just focusing on breadth (massive multilinguality).
April 24, 2025 at 12:08 AM
🎯In order to keep advancing mLLMs, we need to advance our evaluation methods.
We need meta-evaluation research to think beyond one-size-fits-all automatic evaluation, develop richer human evaluations, and iterate to adapt them as capabilities advance. 🔄
April 17, 2025 at 10:56 AM
🤔Yes, none of these principles are novel, nor are the techniques particularly sophisticated.
Yet despite their effectiveness, none of them are standard practice.
✔️We’ve compiled a checklist to help incorporate them in model evaluations.
April 17, 2025 at 10:56 AM
(5) Advancing reproducibility through transparency 🪟
Current mLLM evaluations are nearly impossible to reproduce because evaluation configurations (incl. the task formulation, as in the example below) are not transparent. We argue for open evaluation releases that include model outputs and their scores.
April 17, 2025 at 10:56 AM
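A minimal sketch of what such a self-contained release record could look like, where every record carries the full evaluation configuration alongside the model output and its score. All field names and values here are illustrative assumptions, not the paper's actual release format:

```python
import json

# Hypothetical evaluation config: model, task formulation, and decoding
# settings are spelled out so results can be re-scored and reproduced.
eval_config = {
    "model": "my-mllm-v1",                               # assumed model ID
    "task": "example-qa-benchmark",                      # assumed task name
    "language": "sw",
    "prompt_template": "Question: {question}\nAnswer:",  # exact task formulation
    "decoding": {"temperature": 0.0, "max_tokens": 128},
    "metric": "exact_match",
}

records = [
    {
        "example_id": "qa-sw-0001",
        "prompt": "Question: ...\nAnswer:",
        "model_output": "...",
        "reference": "...",
        "score": 1.0,
        "config": eval_config,  # repeated per record for self-containment
    },
]

# One JSON object per line: easy to diff, filter, and re-score.
with open("eval_release.jsonl", "w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec, ensure_ascii=False) + "\n")
```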
(4) Conducting richer analyses 🔬
Aggregate benchmark metrics do not reveal what differentiates the outputs of two models, yet this is often the first step in human evaluation. For example, we can group evaluation prompts by length or category.
April 17, 2025 at 10:56 AM
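A small sketch of such a breakdown, assuming per-example result records with a prompt, a category, and a score (the records and bucket widths are made up for illustration):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical per-example results; in practice these would come from
# an evaluation release like the one sketched above.
results = [
    {"prompt": "Translate: hello", "category": "translation", "score": 0.9},
    {"prompt": "Summarize the following long article about ...", "category": "summarization", "score": 0.4},
    {"prompt": "Translate: good morning everyone", "category": "translation", "score": 0.7},
]

def length_bucket(prompt: str, width: int = 10) -> str:
    """Bucket prompts by word count, e.g. '0-9 words', '10-19 words'."""
    n = len(prompt.split())
    lo = (n // width) * width
    return f"{lo}-{lo + width - 1} words"

# Report mean scores grouped by prompt length and by category,
# instead of a single aggregate number.
for key_fn, name in [(lambda r: length_bucket(r["prompt"]), "length"),
                     (lambda r: r["category"], "category")]:
    groups = defaultdict(list)
    for r in results:
        groups[key_fn(r)].append(r["score"])
    print(f"--- scores by {name} ---")
    for group, scores in sorted(groups.items()):
        print(f"{group}: mean={mean(scores):.2f} (n={len(scores)})")
```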
(3) Aggregating responsibly 🏗️
How we aggregate results across tasks and languages shapes the interpretation of model comparisons. Uniform weighting is not necessarily fair, given differences in the training distribution (e.g., language or task support).
April 17, 2025 at 10:56 AM
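A toy example of how the aggregation scheme changes the headline number; the per-language scores and training-support weights are invented for illustration, not measured values:

```python
from statistics import mean

# Per-language scores for one hypothetical model.
scores = {"en": 0.85, "de": 0.80, "sw": 0.40, "yo": 0.35}

# Uniform weighting treats all languages as equal contributors...
uniform = mean(scores.values())

# ...but if the model saw little Swahili or Yoruba during training,
# weighting by (assumed) training support tells a different story
# about in-distribution vs. out-of-distribution quality.
train_support = {"en": 0.6, "de": 0.3, "sw": 0.07, "yo": 0.03}
weighted = sum(scores[lang] * w for lang, w in train_support.items())

print(f"uniform mean:     {uniform:.3f}")
print(f"support-weighted: {weighted:.3f}")
```

Neither number is "the" score: making the weighting scheme explicit is what lets readers interpret the comparison responsibly.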