| Open Philanthropy | |
|---|---|
| Type | LLC; affiliated 501(c)(4) Open Philanthropy Action Fund |
| Industry | Philanthropy, grantmaking |
| Founded | 2014 (as Open Philanthropy Project, a collaboration between GiveWell and Good Ventures); independent since June 1, 2017 |
| Renamed | Coefficient Giving (November 18, 2025) |
| Founders | Holden Karnofsky, Dustin Moskovitz, Cari Tuna |
| Headquarters | San Francisco, California, United States |
| Key people | Alexander Berger (co-CEO from 2021, sole CEO since 2023); Cari Tuna (Board Chair); Dustin Moskovitz (Board Member); Divesh Makan (Board Member); Holden Karnofsky (former Co-CEO; joined Anthropic in 2025); Luke Muehlhauser (Senior Program Officer, AI Governance and Policy) |
| Primary funder | Good Ventures (foundation of Cari Tuna and Dustin Moskovitz) |
| Cumulative grants | More than $4 billion directed as of June 2025 |
| Annual grantmaking (2024) | Over $650 million |
| Focus areas | AI safety, global health and well-being, farm animal welfare, biosecurity and pandemic preparedness, scientific research, forecasting, lead exposure, abundance and growth |
| Affiliations | Effective altruism movement |
| Website | openphilanthropy.org (now coefficientgiving.org) |
Open Philanthropy is an American grantmaking organization headquartered in San Francisco, California. It is best known as the world's largest single private funder of artificial intelligence safety research and as the operating arm that directs the philanthropic spending of Cari Tuna and Dustin Moskovitz, co-founder of Facebook (now Meta Platforms) and of the workplace software company Asana.[1] It was created in 2014 as the Open Philanthropy Project, an internal partnership between the charity evaluator GiveWell and the family foundation Good Ventures, and became fully independent on June 1, 2017. In November 2025 it rebranded as Coefficient Giving to signal its expansion into operating multi-donor pooled funds, although the legacy name Open Philanthropy continues to dominate references in academic, journalistic, and effective-altruist sources.[2][3]
As of June 2025, Open Philanthropy had directed more than $4 billion in lifetime grants across its focus areas, which span global health and well-being (malaria interventions, catch-up vaccinations, direct cash transfers), farm animal welfare (cage-free campaigns), biosecurity and pandemic preparedness, scientific research, and the cluster of work on potential risks from advanced AI that has come to define its public profile.[4] In 2024 alone, the organization directed more than $650 million in grants. It is widely regarded as the most influential institutional vehicle of the effective altruism movement.[5] Its footprint in AI safety is unusual: by 2023 AI safety had grown to roughly 12 percent of cumulative giving, and Open Philanthropy was responsible for the majority of all AI-safety dollars deployed worldwide, including foundational support for Anthropic, the Center for AI Safety, the Machine Intelligence Research Institute (MIRI), the Alignment Research Center (ARC), Model Evaluation and Threats Research (METR), Redwood Research, the Center for Human-Compatible AI (CHAI), and the Berkeley Existential Risk Initiative.[6][7]
The organization's lineage begins with GiveWell, a charity-evaluation nonprofit founded in 2007 by hedge-fund analysts Holden Karnofsky and Elie Hassenfeld, who were frustrated with how little rigorous evidence existed about which charities actually saved lives most cost-effectively. GiveWell quickly built a reputation for unusually thorough public research on a small list of recommended global health and poverty charities such as the Against Malaria Foundation and GiveDirectly.[2]
In 2011, Cari Tuna, a former Wall Street Journal reporter, and her husband Dustin Moskovitz, who had co-founded Facebook with Mark Zuckerberg in 2004 and Asana in 2008, established the family foundation Good Ventures. Tuna left journalism to lead the foundation. The couple signed the Giving Pledge in 2014, becoming the youngest couple at the time to commit to giving away most of their wealth, and made clear they intended to spend the bulk of the fortune within their lifetimes.[8] In 2012, Good Ventures and GiveWell launched a joint exploratory project called GiveWell Labs to figure out what an unconstrained donor with hundreds of millions or billions of dollars should fund, even if the answers required moving outside the universe of evidence-backed interventions GiveWell had focused on.[9]
In 2014, GiveWell Labs was renamed the Open Philanthropy Project. The new name reflected openness to a wide range of cause areas, including ones less amenable to randomized controlled trials than GiveWell's malaria-net work, and a commitment to publishing the reasoning behind grants in unusual depth.[10] The framework leaned on the importance, neglectedness, and tractability heuristic associated with effective altruism, paired with a hits-based philosophy in which most grants might fail but a rare success could justify the entire portfolio. This led the organization into U.S. criminal justice reform, immigration policy, macroeconomic stabilization policy, and early work on potential risks from transformative AI.[2] On June 1, 2017, the partnership formally separated: GiveWell transferred the Open Philanthropy Project's assets to a new entity, Open Philanthropy LLC, with Karnofsky and Cari Tuna initially serving as co-CEOs.[11]
In 2021, Open Philanthropy formalized a co-CEO structure with Karnofsky leading the Global Catastrophic Risks portfolio (AI safety and biosecurity) and Alexander Berger, a longtime senior staffer who had joined GiveWell Labs at its founding, leading the Global Health and Wellbeing portfolio.[12] The structure reflected a longstanding observation that the two halves of the portfolio operated on radically different evidentiary standards and timescales: Global Health and Wellbeing grants could be benchmarked against GiveDirectly's cash transfers with concrete cost-per-life-saved estimates, while Global Catastrophic Risks grants often funded research whose payoffs would only be visible decades into the future, if ever.[13]
In March 2023, Holden Karnofsky began a leave of absence from Open Philanthropy to focus directly on AI safety. He had spent the prior years writing the Cold Takes blog about transformative AI, longtermism, and what he called the "most important century" hypothesis, and he had become convinced that the rate of progress in large language models merited a personal pivot.[14] He helped run Open Philanthropy through July 2023 and then transitioned to a Visiting Scholar role at the Carnegie Endowment for International Peace.
Karnofsky's departure also reflected a longstanding conflict-of-interest concern: in August 2017 he had married Daniela Amodei, who in 2021 co-founded Anthropic with her brother Dario Amodei. As Open Philanthropy's exposure to Anthropic grew, both through direct grants and through Moskovitz's personal investment, the appearance of conflict became difficult to manage from a co-CEO seat.[15] In late 2025, Karnofsky started a new role at Anthropic, working on Claude's character, constitution, and responsible scaling policies.[16] With Karnofsky's departure, Alexander Berger became sole CEO. Luke Muehlhauser, who had joined in 2015 after serving as executive director of MIRI, continued to lead AI Governance and Policy grantmaking, with the AIGP team alone aiming to deploy more than $100 million per year by the mid-2020s.[17]
On November 18, 2025, Open Philanthropy announced that it was rebranding as Coefficient Giving. A coefficient multiplies the value of whatever it is paired with, evoking the goal of amplifying donor impact, and the syllables "co" and "efficient" capture the dual focus on collaboration with other donors and cost-effectiveness.[3] The rebrand came with a substantive strategic shift: programs previously operated as internal grantmaking divisions were converted into named funds other philanthropists could join. The Lead Exposure Action Fund ($125 million, launched 2024 with the Gates Foundation among others) and the Abundance and Growth Fund ($120 million, launched 2025 with Stripe co-founder Patrick Collison) had already operated this way, but under the new structure all focus areas were rebuilt as funds that could accept outside capital.[18][19] In 2024, Open Philanthropy directed more than $100 million from donors other than Good Ventures, and in 2025 that figure more than doubled.[3]
Good Ventures remains by far Open Philanthropy's largest single funder. Tuna and Moskovitz have pledged to give away the great majority of their wealth within their lifetimes rather than build a permanent endowment. As Moskovitz's Asana and Facebook holdings appreciated, Good Ventures was repeatedly recapitalized; in August 2024 alone, Moskovitz transferred an additional $1.9 billion in Asana stock and other assets into the foundation, one of the largest single-year additions to a U.S. private foundation in history.[20] By late 2025, Tuna and Moskovitz's combined philanthropic commitment was widely estimated at more than $20 billion, with about $5 billion already distributed.[21] As progress in AI accelerated and, in the view of leadership, timelines to transformative AI shortened, the organization began publicly discussing whether to advise Good Ventures to spend down significantly faster, and the annual grantmaking budget more than doubled between 2022 and 2024.
Open Philanthropy's grantmaking is organized into two umbrella portfolios: Global Health and Wellbeing, and Global Catastrophic Risks. Within those portfolios, the organization operates a set of named funds and program areas. The table below summarizes the focus areas and approximate cumulative giving as of mid-2025.
| Focus area | Portfolio | Approximate cumulative giving (lifetime) | Representative grantees |
|---|---|---|---|
| Global health and development | Global Health and Wellbeing | More than $1.6 billion | GiveWell top charities, GiveDirectly, Against Malaria Foundation, New Incentives, Helen Keller International |
| Scientific research | Global Health and Wellbeing | Hundreds of millions | Target Malaria, R21 malaria vaccine, Bill and Melinda Gates Medical Research Institute |
| Farm animal welfare | Global Health and Wellbeing | More than $300 million | The Humane League, Mercy for Animals, Compassion in World Farming, Open Wing Alliance |
| Lead exposure (Lead Exposure Action Fund) | Global Health and Wellbeing | $125 million pooled fund | LEEP, Pure Earth, national environmental agencies |
| Abundance and growth | Global Health and Wellbeing | $120 million pooled fund | Institute for Progress, Center for Open Science, growth-oriented research |
| Forecasting | Global Health and Wellbeing | Tens of millions | Metaculus, Good Judgment, Forecasting Research Institute |
| AI safety, alignment, and governance | Global Catastrophic Risks | More than $500 million | Anthropic, Center for AI Safety, MIRI, ARC, METR, Redwood Research, CHAI, BERI |
| Biosecurity and pandemic preparedness | Global Catastrophic Risks | Hundreds of millions | Blueprint Biosecurity, Brown Pandemic Center, Mirror Biology Dialogues Fund, Johns Hopkins Center for Health Security |
The Global Health and Wellbeing portfolio uses GiveDirectly's unconditional cash transfers as a baseline: new programs are typically required to clear a bar of roughly 1,000 times cash in expected welfare-adjusted impact per dollar.[13]
Open Philanthropy's largest sustained line of giving has been to GiveWell-recommended top charities, which absorb roughly half of Global Health and Wellbeing dollars. The interventions are concentrated in malaria control (Against Malaria Foundation bednets, indoor residual spraying, seasonal chemoprevention), childhood vaccinations (with New Incentives running cash incentive programs in Nigeria to keep families on schedule for catch-up vaccinations), vitamin A supplementation (Helen Keller International), and direct cash transfers (GiveDirectly).[22] Open Philanthropy has also funded R21, a next-generation malaria vaccine developed at Oxford and now being scaled to protect millions of children annually, and Target Malaria, a $17.5 million multi-year grant for gene-drive technology to suppress mosquito populations in sub-Saharan Africa.[22]
Open Philanthropy is widely credited with creating the modern farm-animal-welfare funding field. Its central bet has been on corporate cage-free campaigns, in which coalitions of advocacy groups pressure major food companies to phase out battery cages for egg-laying hens. As of 2025, more than 3,000 companies globally had signed cage-free pledges, including most of the largest U.S. and European retailers, fast-food chains, and food-service companies, and an increasing share of large Asian-Pacific food firms. Open Philanthropy estimates that fully implemented pledges will spare roughly 500 million animals per year from life in barren cages.[23] The Humane League has been the largest grantee in this area, receiving more than $60 million across multiple grants since 2016, including a 2024 general-support grant of $8.4 million that supported the Open Wing Alliance.[24]
Open Philanthropy began funding biosecurity and pandemic preparedness in 2015, roughly five years before the COVID-19 pandemic gave the field broader visibility. The work focuses on a few defensible bets: affordable respiratory protection for healthcare workers in a future pandemic, defensive technologies such as metagenomic sequencing and Far-UVC light, stronger international norms on biological weapons and dual-use research oversight, controls on DNA synthesis screening, and basic research on countermeasures.[25] Major grantees include Blueprint Biosecurity, the Brown Pandemic Center, the Mirror Biology Dialogues Fund, and the Health Security Scholars Program at Johns Hopkins.[26]
The most influential and most controversial part of Open Philanthropy's work is the cluster of grants it has made to support AI safety and AI alignment research, AI governance and policy, and field-building. Holden Karnofsky began studying potential risks from advanced AI seriously in the mid-2010s, and the program has grown roughly an order of magnitude every few years since. By mid-2023, Open Philanthropy had given approximately $336 million cumulatively to AI safety, with roughly $46 million in 2023 alone, making it the largest funder of the field worldwide. By 2025, cumulative AI-safety giving had passed an estimated $500 million.[6][7] Its published AI safety strategy is built on three pillars: improving visibility into AI capabilities through evaluations and forecasting; developing technical and policy safeguards against catastrophic risks; and building the talent pipeline and institutional capacity the field urgently needs.[27]
| Organization | Approximate cumulative Open Philanthropy support | Focus |
|---|---|---|
| Anthropic | Significant (Series A and via Moskovitz personal investment, with stake reportedly worth $500M+ transferred into Good Ventures) | Frontier model development with safety research |
| Machine Intelligence Research Institute (MIRI) | More than $14 million across multiple grants | Foundational alignment theory; founded by Eliezer Yudkowsky |
| Center for AI Safety | Tens of millions across 2022 and 2023 general support grants | Technical safety research, policy, field-building |
| Redwood Research | More than $20 million (including $10.7 million in 2023) | Empirical alignment, interpretability, and red-teaming |
| Alignment Research Center (ARC) | Multiple grants since 2022 | Theoretical alignment research |
| METR (formerly ARC Evals) | Significant support since 2023 spinout | Evaluation of dangerous capabilities in frontier models |
| Center for Human-Compatible AI (CHAI) at UC Berkeley | Multi-million grants over multiple years | Academic AI safety research, founded by Stuart Russell |
| Berkeley Existential Risk Initiative (BERI) | Multiple grants | Operational support for academic existential-risk research groups |
| Long-Term Future Fund | Annual support | Regranting to small and early-stage AI safety projects and individuals |
| Center for Security and Emerging Technology (CSET) | Multi-million grants | AI policy research at Georgetown |
Open Philanthropy and its primary funder are closely tied to Anthropic. When Anthropic completed its $124 million Series A in 2021, the round was led by Skype co-founder Jaan Tallinn and included Dustin Moskovitz's personal investment alongside James McClave, Eric Schmidt, and the Center for Emerging Risk Research, with Open Philanthropy itself contributing a roughly $30 million grant or investment to Anthropic that same year. Moskovitz later transferred his Anthropic stake, by then reportedly worth around $500 million, into Good Ventures to address the appearance of conflict of interest, given Karnofsky's marriage to Anthropic president Daniela Amodei.[28] The relationship has remained a source of internal debate and external criticism. Supporters argue that having a safety-focused frontier lab is a high-value bet even if the funder benefits financially; critics argue that funding a for-profit AI lab while simultaneously funding research warning about the dangers of frontier AI is internally inconsistent.[29]
Open Philanthropy's AI Governance and Policy team, led since 2015 by Luke Muehlhauser, focuses on building U.S. and international institutional capacity to govern advanced AI systems. The team funds think tanks, academic centers, fellowships placing technical experts in government roles, and policy research on compute governance, model evaluations, and international AI safety institutes. By the mid-2020s the AIGP team was deploying more than $100 million per year.[17]
Open Philanthropy is the most prominent institutional expression of the effective altruism (EA) movement, which argues donors should use evidence and reason to do the most good per dollar. Karnofsky, Berger, Muehlhauser, and most senior staff are publicly identified with effective altruism, and the organization has long collaborated with EA-aligned groups including the Centre for Effective Altruism, 80,000 Hours, and the Long-Term Future Fund.[5]
In November 2022, the cryptocurrency exchange FTX collapsed and its founder, Sam Bankman-Fried, was charged with fraud. Bankman-Fried had publicly identified with effective altruism and had launched the FTX Future Fund in February 2022 with the goal of giving at least $100 million in its first year, possibly as much as $1 billion. Many Future Fund recipients were AI safety, biosecurity, and longtermist organizations that overlapped substantially with Open Philanthropy's portfolio. When FTX collapsed, the Future Fund team resigned in an open letter, grants worth tens of millions of dollars were either clawed back or never paid, and the broader EA movement faced a sustained reputational crisis.[30]
Dustin Moskovitz wrote publicly after the collapse that the EA movement needed a clearer story rejecting ends-justifying-means reasoning. Open Philanthropy moved quickly to backstop a number of organizations whose Future Fund commitments fell through, contributing to the spending acceleration of 2023 to 2025. Press coverage of EA during this period was sustained and often critical, with The New York Times, The Washington Post, TIME, and Fortune running long features on the movement's culture, governance, and entanglement with AI labs.[31][32]
| Name | Role | Background |
|---|---|---|
| Alexander Berger | CEO (sole CEO since 2023) | Joined GiveWell Labs at its founding; led policy philanthropy and Global Health and Wellbeing portfolio |
| Cari Tuna | Board Chair | Co-founder of Good Ventures; former Wall Street Journal reporter |
| Dustin Moskovitz | Board Member | Co-founder of Facebook (2004) and Asana (2008); primary funder via Good Ventures |
| Divesh Makan | Board Member | Founder, Iconiq Capital |
| Holden Karnofsky | Board Member; former Co-CEO | Co-founder of GiveWell (2007); led the Global Catastrophic Risks portfolio until 2023; joined Anthropic in 2025 |
| Luke Muehlhauser | Senior Program Officer, AI Governance and Policy | Joined 2015 from MIRI, where he was executive director |
Karnofsky continues to write the Cold Takes blog, which articulated many of the worldviews now central to Open Philanthropy's AI cluster, including the "most important century" framing.[14]

Open Philanthropy operates as a Delaware LLC paired with the Open Philanthropy Action Fund, a 501(c)(4) that handles grantmaking with a meaningful lobbying or political component. The LLC structure allows the organization to fund a wide range of activities, including for-profit investments, without the constraints of conventional 501(c)(3) status, although most grants go to charitable recipients.[33]
Open Philanthropy's cause-selection framework relies on three criteria summarized as INT: importance (the scale of the problem and the depth of harm), neglectedness (whether the cause is already well funded by other actors), and tractability (the likelihood that an additional dollar can produce meaningful improvement). Within causes, the organization uses a hits-based giving philosophy similar to early-stage venture capital, accepting that most grants may not produce measurable impact in exchange for the chance that a small number produce extraordinary returns. The Global Health and Wellbeing portfolio is operationalized using a 1,000-times-cash bar (relative to GiveDirectly's unconditional cash transfers) for new programs.[34][13]
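The shape of the 1,000-times-cash comparison can be illustrated with back-of-the-envelope arithmetic. The sketch below uses entirely hypothetical numbers and a made-up helper function; it shows only the form of the benchmark (welfare-adjusted impact per dollar measured as a multiple of direct cash transfers), not Open Philanthropy's actual estimates or internal models.

```python
# Illustrative sketch of a "multiple of cash" cost-effectiveness bar.
# All numbers and names are hypothetical; this mirrors the shape of the
# comparison described above, not Open Philanthropy's actual methodology.

CASH_BASELINE = 1.0   # welfare units produced per dollar of direct cash transfer
BAR_MULTIPLE = 1000   # a new program must beat cash by roughly this factor

def clears_bar(welfare_units: float, cost_dollars: float,
               bar: float = BAR_MULTIPLE) -> bool:
    """True if expected welfare per dollar is at least `bar` times cash."""
    per_dollar = welfare_units / cost_dollars
    return per_dollar >= bar * CASH_BASELINE

# Hypothetical program: 5,000,000 expected welfare units for a $4,000 grant
# gives 1,250 units per dollar, clearing a 1,000x-cash bar.
print(clears_bar(5_000_000, 4_000))   # True

# Hypothetical program at 100 units per dollar falls well short of the bar.
print(clears_bar(1_000, 10))          # False
```

In practice the "welfare units" would themselves be uncertain estimates, which is why the bar is described as "roughly" 1,000 times cash rather than a precise threshold.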
Open Philanthropy has attracted both significant praise and significant criticism. Supporters argue that its public reasoning, willingness to fund neglected causes, and unusual focus on cost-effectiveness have shifted the broader philanthropic landscape and produced demonstrable wins in farm animal welfare, biosecurity field-building, and global health. Many of its grantees, including the Center for AI Safety, METR, and Blueprint Biosecurity, would not exist at scale without sustained Open Philanthropy support.[35]
Critics have raised several recurring concerns. A structural concern is that a single funder with such outsized influence in AI safety and longtermist causes can distort research agendas, crowd out alternative perspectives, and shape entire fields toward the worldview of its donors and senior staff. A 2024 Washington Examiner investigation examined Open Philanthropy's role in shaping U.S. AI policy during the Biden administration and characterized it as a tightly networked influence operation.[36] A conflict-of-interest concern centers on the closeness of Open Philanthropy, Good Ventures, and Anthropic (the Karnofsky-Amodei marriage and the Moskovitz personal investment), the subject of repeated coverage including a 2023 New York Times feature on the EA-OpenAI-Anthropic nexus during the OpenAI board crisis.[37] A methodological concern from some biosecurity, animal-welfare, and global-development practitioners is that Open Philanthropy's worldview gives too much weight to extreme low-probability tail risks at the expense of nearer-term suffering.[26]