80,000 Hours is a non-profit organisation based in the United Kingdom that produces research and advice on careers within the effective altruism tradition. It was founded in October 2011 by William MacAskill (then Will Crouch) and Benjamin Todd at the University of Oxford, and its name refers to the rough number of working hours in a typical career: about 40 hours a week, 50 weeks a year, for 40 years. 80,000 Hours encourages talented graduates, primarily aged 20 to 40, to direct their work toward what it considers the world's most pressing problems, with a current emphasis on risks from advanced artificial intelligence, catastrophic biorisks, nuclear war, great-power conflict, and improving institutional decision-making.[1][2]
The organisation is best known for four products: a free online career guide and problem profiles, a curated job board of high-impact roles, a one-on-one career advisory programme, and the long-form 80,000 Hours Podcast, hosted primarily by Robert Wiblin since June 2017. For most of its history it operated as a project under the Centre for Effective Altruism and later under the broader Effective Ventures umbrella; on April 1, 2025 it spun out as an independent legal entity. The current chief executive is Niel Bowerman, who took over from interim CEO Brenton Mayer in January 2024.[3][4]
The ideas that became 80,000 Hours took shape in Oxford in early 2011. MacAskill, then a graduate student in philosophy, and Todd, then completing his master's in physics and philosophy, had been involved in founding Giving What We Can with Toby Ord in 2009 and were extending that project's logic from charitable giving to career choice. Their argument was that for many ethically motivated graduates, choosing a career was a larger lever than choosing where to donate. They presented an early version of the idea at a meeting in February 2011, and the strong response led them to incorporate the project later that year.[1][5]
80,000 Hours was launched in October 2011 in Oxford as a project under what would soon be called the Centre for Effective Altruism. Todd became the organisation's first executive director and held the role through May 2022. The project went full-time in 2012, after which it began publishing career reviews, problem profiles, and the long-form research articles that came to define its house style. In its early years, the organisation maintained a relatively broad cause portfolio that gave substantial weight to global health, animal welfare, and policy work alongside emerging concerns about existential risk.[1][2]
MacAskill stepped back from operational involvement in the early 2010s to focus on academic philosophy and broader effective altruism work, including Doing Good Better (2015) and What We Owe the Future (2022). Todd remained the public face of the organisation through the late 2010s and in 2016 authored the first edition of the book 80,000 Hours: Find a fulfilling career that does good.[6][7]
80,000 Hours organises its advice around a small set of frameworks that it has popularised within and beyond effective altruism.
The most widely cited is the importance (or scale), tractability, and neglectedness framework, usually abbreviated as the ITN framework. Problems are scored on three rough axes: how many beings are affected and how badly (importance or scale), how much progress an additional unit of work or money is likely to buy (tractability or solvability), and how little existing effort is already directed at the problem (neglectedness). The framework was developed in dialogue with Holden Karnofsky and others at GiveWell in the early 2010s and has become a standard tool for cause prioritisation across the effective altruism ecosystem.[1][8]
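The multiplicative logic of the framework can be sketched in a few lines of code. The sketch below uses entirely made-up scores and a simplified convention (each factor on a rough logarithmic scale, so adding the scores corresponds to multiplying the underlying quantities); it is an illustration of the scoring structure, not 80,000 Hours' actual methodology or data.

```python
from dataclasses import dataclass

@dataclass
class Problem:
    """A cause area scored on the three ITN axes (hypothetical log-scale scores)."""
    name: str
    importance: float    # how many beings are affected, and how badly
    tractability: float  # progress bought per additional unit of effort
    neglectedness: float # higher score = less existing effort on the problem

    @property
    def total(self) -> float:
        # On a log scale, summing the three scores is equivalent to
        # multiplying the underlying raw quantities.
        return self.importance + self.tractability + self.neglectedness

# Illustrative, invented scores for two hypothetical problems.
problems = [
    Problem("Problem A", importance=12, tractability=4, neglectedness=6),
    Problem("Problem B", importance=10, tractability=6, neglectedness=4),
]

# Rank problems by their combined ITN score, highest first.
for p in sorted(problems, key=lambda p: p.total, reverse=True):
    print(f"{p.name}: {p.total}")
```

In practice the framework is used as a rough heuristic for comparing orders of magnitude rather than as a precise calculation, which is why coarse log-scale scores suffice.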
For evaluating individual career paths, the organisation distinguishes between several factors. Direct impact asks how much good a particular role does on its own terms. Career capital describes the long-term skills, credentials, network, and reputation that a role builds, on the assumption that early-career decisions matter mostly through the options they open up later. Personal fit acknowledges that the same job can be highly impactful for one person and unsuitable for another. Counterfactual impact asks whether a position would have been filled by someone else just as effectively, which can substantially reduce the marginal value of accepting a role at a popular organisation.[8][9]
In earlier years, 80,000 Hours placed significant weight on the strategy of earning to give, in which a person works in a high-paying field such as quantitative finance and donates most of their earnings to effective charities. The organisation has since downgraded this recommendation for most readers, arguing that direct work on top problems is usually higher-leverage given current funding levels in cause areas like AI safety and biosecurity.[10]
The organisation publishes a public list of what it considers the world's most pressing problems, updated periodically. As of the mid-2020s, the ranking puts AI risk at the top, followed by other catastrophic risks.
| Cause area | Current priority | Notes |
|---|---|---|
| Risks from transformative AI | Highest | Top-ranked problem since 2016; further emphasised in 2025 strategy update |
| Catastrophic biorisks | High | Engineered pandemics, biosecurity policy, dual-use research governance |
| Nuclear war | High | Escalation pathways and nuclear command-and-control reforms |
| Great-power conflict | High | Particularly US-China dynamics and their interaction with AI |
| Improving institutional decision-making | High | Forecasting, science of science, epistemic infrastructure |
| Building effective altruism | Medium-high | Movement-building, community health, governance |
| Global health | Notable but de-emphasised | Still recommended via GiveWell-style giving |
| Factory farming and animal welfare | Notable but de-emphasised | Recommended for those with strong personal fit |
80,000 Hours has described itself as having considered AI a top problem since around 2016, and sharpened that focus in its 2025 strategic update, which narrowed the primary focus to helping people work on safely navigating the transition to a world with advanced AI. The shift was framed as a response to faster-than-expected progress in frontier models since 2022.[11][12]
The organisation publishes a continually updated list of high-impact career paths, with detailed problem profiles, skill profiles, and career reviews. The current top recommendations cluster around AI and other catastrophic risks.
The job board, curated by staff, listed roughly 400 positions per month by 2024 and received about 75,000 monthly clicks, with the organisation reporting at least 200 confirmed placements during Niel Bowerman's prior tenure as Director of Special Projects.[3]
The 80,000 Hours Podcast launched in June 2017 with Robert Wiblin as host and Keiran Harris as producer. Episodes are unusually long for a content-marketing podcast, frequently running between two and four hours, and consist of in-depth interviews with researchers, policy figures, philosophers, and entrepreneurs working on the organisation's priority problems. Luisa Rodriguez joined as a second host in the early 2020s after working as William MacAskill's chief of staff at the Forethought Foundation; Keiran Harris also hosts a periodic spinoff feed.[13][14]
Notable guests have included Holden Karnofsky on AI takeoff and the "most important century" thesis, Toby Ord on long-term human survival in episode #6, Stuart Russell on flaws in current AI architectures in episode #80, William MacAskill on longtermism and, in a later episode, on the collapse of FTX, Yoshua Bengio on AI risk, Carl Shulman on AI takeoff, Ajeya Cotra on biological anchors for AI timelines, Hilary Greaves on global priorities research, Demis Hassabis on DeepMind, Holly Elmore on AI activism, and Sam Bankman-Fried before the collapse of FTX.[14][15]
By the end of 2024 the show had released around 45 main-feed interviews that year (up from 33 in 2022) and reported 99,416 subscribers across platforms. In 2024 only roughly 12 of 38 main-feed episodes focused directly on AI, a ratio the organisation later said did not match its urgency on the topic; subsequent programming has skewed more heavily toward AI. In 2025 the organisation launched a dedicated YouTube and video programme called "AI in Context," whose debut content reportedly drew nearly two million views in its first week.[12][16]
Though 80,000 Hours had ranked AI risk as its top problem since 2016, the organisation publicly tightened its focus on AI through a series of strategic updates between 2022 and 2025. After the FTX collapse and the resulting period of leadership turnover, the executive team led by Niel Bowerman from January 2024 conducted a strategic review that culminated in a 2025 announcement that 80,000 Hours would narrow its primary focus to helping people work on safely navigating the transition to a world with advanced AI.[3][12]
The pivot has not been universally welcomed inside effective altruism. Critics on the EA Forum have argued that the organisation has become "AI Safety 80K Hours" in practice, that its career recommendations now lean heavily on safety teams at frontier AI labs, and that this has narrowed the talent pipeline for academic and government work on AI. Others have raised concerns that some safety-branded roles at frontier companies are closer to capabilities work, or contribute indirectly to capabilities deployment. Staff have responded that the job board does not post pure capabilities roles and that listed positions are screened for their potential safety contribution rather than admitted as "safety washing."[17][18]
The pivot coincided with internal cultural changes. The organisation's 2023 to mid-2025 review reported a deliberate move toward a faster-paced culture, increased internal use of AI tools, and a higher weight on AI-related programming. Some senior staff departed during this period, and a portion of the marketing budget was reallocated from paid digital advertising toward content, notably video.[3]
80,000 Hours runs a suite of programmes that channel readers from initial exposure to direct career placement.
80,000 Hours is a registered charity and accepts no advertising, corporate sponsorship, or paid placements on its job board. Its largest funder by a wide margin is Coefficient Giving, the name Open Philanthropy adopted in a 2025 rebrand, which is itself funded primarily by Dustin Moskovitz and Cari Tuna. Under its two names, the funder has provided more than $20 million in cumulative grants to 80,000 Hours through 2025.[1][19]
Other listed funders have included individual effective-altruism-aligned donors such as Ben Delo, Luke Ding, Alex Gordon-Brown, Denise Melchin, and Jaan Tallinn, plus institutional sources such as the Frederick Mulder Foundation and the Effective Altruism Meta Fund. The organisation also received a $50,000 grant from Y Combinator in 2015 as part of the YC non-profit programme.[1]
The organisation's budget grew sharply alongside its headcount and AI focus. The 2023 to mid-2025 review reported costs of $10.38 million in 2024, up from $6.66 million in 2022, while staffing rose to 36.81 full-time equivalents in 2024, a 53 percent increase from 2022. The team has since grown further, with the organisation describing itself as having more than 50 full-time staff as of its 2025 spin-out.[2][3]
Like most large effective-altruism-aligned organisations, 80,000 Hours had received grants from the FTX Future Fund, the philanthropic arm associated with Sam Bankman-Fried's cryptocurrency exchange FTX and his trading firm Alameda Research. The Future Fund was established in February 2022 and disbursed roughly $160 million in grants before FTX's collapse in November 2022. Sam Bankman-Fried was also listed among 80,000 Hours' historical individual donors and had been cited in earlier organisational writing as an example of earning to give in practice.[1][20]
When FTX collapsed, 80,000 Hours faced reputational damage and concrete legal exposure. Howie Lempel, who had taken over as chief executive in May 2022 from Benjamin Todd, went on leave to serve as interim CEO of Effective Ventures Foundation (UK) during the immediate post-collapse crisis. Brenton Mayer ran 80,000 Hours as interim CEO from late 2022 until January 2024.[5][21]
In 2024 Effective Ventures UK and EV US reached a settlement with the FTX bankruptcy estate, paying $26,786,503 to the estate, an amount equal to 100 percent of the funds the two entities had received from FTX and the FTX Foundation in 2022. Because 80,000 Hours was at the time an EV-sponsored project, it was covered by this settlement rather than facing direct individual clawback proceedings. The settlement, combined with EV's plan to spin out its sponsored projects into independent legal entities, was a major factor in 80,000 Hours' April 1, 2025 transition to standalone non-profit status.[22]
80,000 Hours has received mainstream coverage and has shaped the career trajectories of thousands of effective-altruism-aligned graduates. The Atlantic's 2015 piece "The Greatest Good" highlighted the earning-to-give thesis, and profiles in The New York Times, The New Yorker, Wired, and The Guardian have treated 80,000 Hours as a serious source on careers and global priorities while probing its narrower commitments.[6][23][24]
A recurring critique inside effective altruism is that the organisation has become single-issue. Critics on the EA Forum and elsewhere have argued that 80,000 Hours' near-exclusive focus on AI risk since around 2022, and especially after its 2025 strategic update, crowds out attention to global health, animal welfare, and other cause areas where the marginal dollar may still be highly cost-effective. Some senior figures within the broader movement have left or distanced themselves over related concerns about scope.[17]
A second cluster of critiques targets demographic and institutional homogeneity. Recommended pathways tend to favour graduates of elite Western universities with backgrounds in philosophy, mathematics, or computer science, and the talent pipeline is concentrated in Oxford, San Francisco, Berkeley, and London. Critics including Émile P. Torres have argued that 80,000 Hours and adjacent organisations form part of what Torres and Timnit Gebru call the TESCREAL bundle, techno-utopian movements they argue are insufficiently attentive to present harms.[25]
A third strand concerns the AI labs question. Throughout 2023 and 2024, the EA Forum hosted extended debate about whether 80,000 Hours should recommend that talented graduates take roles at frontier AI companies including OpenAI, Anthropic, and Google DeepMind, given the labs' role in racing toward general-purpose AI. The organisation's position has been that it lists only roles plausibly contributing to safety and does not list pure capabilities roles. Critics have argued that even ostensibly safety-focused roles at frontier labs can have ambiguous net effects, particularly when staff rotate between safety and capabilities work over time.[18][26]
A fourth set of concerns dates from the FTX period. The pre-collapse celebration of Sam Bankman-Fried as an exemplar of earning to give, together with the fact that 80,000 Hours had received Future Fund grants, drew criticism after the fraud was exposed. Commentators argued that the broader earning-to-give framing had created perverse incentives by treating very-high-earning careers as virtuous so long as the proceeds were eventually given away. The organisation has since substantially downplayed earning to give in its recommendations.[10][20]
80,000 Hours has played a measurable role in shaping the AI safety talent pipeline. The job board, advising programme, newsletter, and podcast have collectively introduced young researchers to organisations including MIRI, the alignment teams at Anthropic, OpenAI, and DeepMind, the Alignment Research Center, Redwood Research, the Center for AI Safety, the Centre for the Governance of AI, Open Philanthropy's grantmaking team, and government bodies including the UK AI Safety Institute and its US equivalent.[27]
Many self-identified AI safety researchers cite 80,000 Hours' problem profile on AI risk, the career guide, or specific podcast episodes as decisive in their early career. The framing of AI risk as a tractable problem amenable to career planning has been credited with making the field more accessible to junior people. Critics describe the same dynamic as a one-way pipeline into a small set of organisations with a particular intellectual culture.[27][28]
80,000 Hours operates within a small ecosystem of effective-altruism-aligned career-advice organisations. None of these are formally subsidiaries of 80,000 Hours, but they share some donors, collaborators, and methodological commitments.
In late 2025 several of these organisations began participating in a working group on coordination across the effective altruism career-advice ecosystem.[29]