Alexander Berger
@albrgr.bsky.social
CEO of Open Philanthropy
More on our big news today:
bsky.app/profile/alb...
November 18, 2025 at 4:17 PM
Reducing lead poisoning scored well on three criteria we use as a proxy for cost-effectiveness:

- Importance: 1m deaths per year, >⅓ of children worldwide affected
- Neglectedness: <1% the funding of malaria or TB
- Tractability: Proven ways of reducing lead in spices & paint
November 18, 2025 at 3:07 PM
In the coming years, we’ll continue to grow our work with Good Ventures. But many others who are interested in giving at scale aren’t on track to do so. We often hear people cite a lack of outstanding, shovel-ready opportunities.
November 18, 2025 at 3:07 PM
Over the last decade, we’ve worked closely with our founding partner Good Ventures to give away over $4b. I am very proud of the results:
November 18, 2025 at 3:07 PM
Some news: Open Philanthropy is now Coefficient Giving! Our mission is unchanged but the new name reflects our growing work with other donors to multiply the impact of their giving.

🧵 on our work to make philanthropy a more efficient "market" and plans going forward:
November 18, 2025 at 3:07 PM
I really like this R21/RTS,S malaria vaccine tracker from @1daysooner.bsky.social

1dayafrica.org/r21-campaign
November 11, 2025 at 11:54 PM
New @devex.com article, ft. @justsand.bsky.social, on our $40m fund to accelerate economic growth in low- and middle-income countries, including a great explanation of hits-based vs. GiveWell-style philanthropic giving

www.devex.com/news/as-aid...
October 21, 2025 at 10:48 PM
Great new article in @devex.com featuring Tom Hird, who works on our Lead Exposure Action Fund. Progress is possible on reducing lead poisoning!

Full article here: www.devex.com/news/sponso...
October 20, 2025 at 8:01 PM
Good article from Bloomberg on how genetic selection causes lots of suffering for broiler chickens, featuring work from several Open Phil grantees on better broiler welfare standards.
www.bloomberg.com/news/featur...
October 10, 2025 at 5:11 PM
We've been funding @cayimby.bsky.social, which championed this work, since its early days, and it's been fun to look back at how far they've come. After a prior incarnation of SB 79 failed in 2018, we renewed our funding and I wrote in our renewal template:
October 10, 2025 at 5:08 PM
CA Governor Newsom just signed SB 79, an important bill to make homes easier to build near transit, and arguably the biggest YIMBY win to date.

Some reflections on the bill & the history of how we got here 🧵
October 10, 2025 at 5:08 PM
If you are a funder interested in getting involved, get in touch - we would love to be a resource! We're increasingly working with other donors and are eager to help them find highly cost-effective opportunities.
October 2, 2025 at 4:05 PM
More resources are needed across these different theories of change.

Other reasons why right now is a leveraged moment: AI advancements have created better research tools, attracted researchers to the field, and increased policy opportunities.
October 2, 2025 at 4:05 PM
On building the field's capacity: scholarships, fellowships, and educational initiatives like MATS and BlueDot Impact have built out impressive talent pipelines. MATS reports that 80% of alumni are working on AI safety!
October 2, 2025 at 4:05 PM
On technical and policy safeguards: Redwood Research's work on loss-of-control scenarios, Theorem's work on developing formal verification methods, and several think tanks' work on technical AI governance show how progress is possible.
October 2, 2025 at 4:05 PM
The rest of the post describes our ~10 years of experience in this space, which shows philanthropy can move the needle.

On visibility into frontier AI R&D: we've supported benchmarks like Percy Liang's CyBench, public data work from @epochai.bsky.social, and research from @csetgeorgetown.bsky.social
October 2, 2025 at 4:05 PM
There are four key reasons other funders are needed:

(1) There are highly cost-effective grants not in Good Ventures' scope
(2) AI policy needs a diverse funding base
(3) Other orgs can make bets we're missing
(4) Generally, AI safety and security is still underfunded!
October 2, 2025 at 4:05 PM
To begin: AI is rapidly advancing, which gives funders a narrow window to make a leveraged difference.
October 2, 2025 at 4:05 PM
People sometimes assume that Open Phil “has it covered” on philanthropy for AI safety & security. That’s not right: some great opportunities really need other funders. Liz Givens and I make the case for why (and why now) in the final post of our series.
www.openphilanthropy.org/research/ai...
October 2, 2025 at 4:05 PM
The third is capacity: we aim to grow and strengthen the fields of research and practice responding to these challenges. This includes support for fellowship programs, career development, conferences, and educational initiatives.
October 1, 2025 at 5:02 PM
The second is designing and implementing technological and policy safeguards. This includes both technical AI safety & security and a range of AI governance work:
October 1, 2025 at 5:02 PM
3 prongs to our grantmaking approach in practice:

The first is increasing visibility into cutting-edge AI R&D, with the goal of better understanding AI’s capabilities and risks. This includes supporting AI model evals, threat modeling, and building public understanding.
October 1, 2025 at 5:02 PM
Today, we've scaled our work on AI safety and security significantly. Our work on risks focuses on worst-case scenarios, but we aim to strike a number of important balances:
October 1, 2025 at 5:02 PM
Ten years later, the landscape has changed drastically: AI is much more advanced and has risen hugely in geopolitical importance. There is greater empirical evidence and expert agreement about the catastrophic risks it could pose.
October 1, 2025 at 5:01 PM
The strategic landscape was very unclear when we first entered the field. As a result, we mostly funded early-stage research and field-building efforts to increase the number of people taking these questions seriously.
October 1, 2025 at 5:01 PM