Awa Dieng
@adoubleva.bsky.social
Researcher at Google DeepMind | organizer afciworkshop.org | research interests: causality, algorithmic fairness
That’s a wrap for #AFME2024!!! 🎉 Thank you to all the authors, attendees, roundtable leads and speakers for the great presentations and insightful discussions!
December 15, 2024 at 3:04 AM
Our amazing panellists discussed how to define fairness, the challenges of evaluation, ethical considerations, and interdisciplinary collaboration for addressing different dimensions of fairness!

@jessicaschrouff.bsky.social @sanmikoyejo.bsky.social @sethlazar.org Hoda Heidari
December 15, 2024 at 3:03 AM
For the final contributed talk, Ben Laufer discussed the fundamental limits in the search for less discriminatory algorithms!!
December 14, 2024 at 11:20 PM
Another great contributed talk!! Natalie Mackraz discussed her work on evaluating gender bias transfer between pre-trained and prompt-adapted LLMs
December 14, 2024 at 11:13 PM
In the third contributed talk, Prakhar Ganesh presented his paper comparing bias mitigation algorithms in ML!
December 14, 2024 at 11:07 PM
To kick off the afternoon session, we have a great talk from @angelinawang.bsky.social on the need for group difference awareness and a new suite of benchmarks for assessing it in LLMs!!
December 14, 2024 at 10:45 PM
Ending the morning session with great discussions at the roundtables 🎊 See you after lunch
December 14, 2024 at 9:26 PM
In the last invited talk of the morning, we have @sethlazar.org giving an insightful talk on evaluating the ethical competence of LLMs!!
December 14, 2024 at 9:25 PM
The second contributed talk at #AFME2024.

To Eun Kim discussed his paper “Towards Fair RAG: On the Impact of Fair Ranking in Retrieval-Augmented Generation”
December 14, 2024 at 7:34 PM
Our first contributed talk is by Alex Tamkin, who presented his work on Evaluating and Mitigating Discrimination in Language Model Decisions!
December 14, 2024 at 7:33 PM
Another great talk of the day by @krvarshney.bsky.social discussing his work on building harm detectors and guardian models for large generative models! He also addressed the need to broaden the dimensions of harms in different use cases
December 14, 2024 at 6:55 PM
Great first talk by Hoda, giving an overview of fairness metrics in traditional ML and generative models. She discussed the desiderata for a good measurement and steps towards building contextually aware fairness metrics in LLMs!
December 14, 2024 at 6:50 PM
Off to a great start with the opening remarks by @nandofioretto.bsky.social on the motivation behind this year's topic.
December 14, 2024 at 5:09 PM
It's time for our #NeurIPS2024 Algorithmic Fairness Workshop #AFME2024 🥳!

Join us TODAY for a full day of discussions on fairness metrics & evaluation. Schedule: afciworkshop.org/schedule

⏰ We start at 9 am with the opening remarks, followed by a keynote on Fairness Measurement by Hoda Heidari
December 14, 2024 at 4:30 PM
📆 AFME workshop: Sat, Dec 14 in room 111-112

My favorite part of the workshop 🥳

💬 Join our amazing leads* at the roundtables for insightful discussions on Fairness/Bias Metrics and Evaluation.

* @angelinawang.bsky.social, Candace Ross (FAIR), Tom Hartvigsen (UofVirginia)
December 11, 2024 at 6:01 PM
📆 AFME workshop: Sat, Dec 14 in room 111-112

Join our expert panellists* for a timely discussion on “Rethinking fairness in the era of large language models”!!

* @jessicaschrouff.bsky.social, @sethlazar.org, Sanmi Koyejo, Hoda Heidari
December 10, 2024 at 4:51 AM
Professor Seth Lazar @sethlazar.org from ANU will discuss challenges in evaluating Ethical Competence in LLMs
December 8, 2024 at 4:17 AM
Professor Sanmi Koyejo and Dr Angelina Wang will present a new fairness metric for measuring discrimination in LLMs
December 8, 2024 at 4:17 AM
Dr Kush Varshney from IBM Research will discuss ongoing work on detecting harms in deployed LLMs and associated challenges
December 8, 2024 at 4:17 AM
Professor Hoda Heidari from CMU will give an overview of the algorithmic fairness field and discuss reconciling existing methods with bias measurement methods in LLMs
December 8, 2024 at 4:17 AM
In The Nteasee Study, we conducted surveys of 672 general population participants across 5 countries (Ghana, Rwanda, Kenya, Nigeria, and South Africa) as well as in-depth interviews of experts working in AI/ML, health, policy, and equity. arxiv.org/abs/2409.12197 (2/11)
November 17, 2024 at 9:57 PM
We (+ @dr-nyamewaa.bsky.social) are excited to present two recent works detailing the landscape of AI in health in Africa from a fairness and equity angle:

The Nteasee Study arxiv.org/abs/2409.12197

and

The Case for Globalizing Fairness dl.acm.org/doi/10.1145/... (1/11)
November 17, 2024 at 9:57 PM