William David MacAskill (born William Crouch; 24 March 1987) is a Scottish philosopher, author, and the leading public figure of the [[effective_altruism|effective altruism]] movement. He is an Associate Professor of Philosophy at the [[oxford_university|University of Oxford]] and a Senior Research Fellow at Forethought, an AI macrostrategy research group he co-founded in 2024 [1][2]. MacAskill co-founded [[giving_what_we_can|Giving What We Can]] (2009), [[80000_hours|80,000 Hours]] (2011), and the [[centre_for_effective_altruism|Centre for Effective Altruism]] (2012), and was a co-founder of the Global Priorities Institute at Oxford [3]. He is the author of Doing Good Better (2015) and the New York Times bestseller What We Owe the Future (2022), the latter of which popularized the philosophy of [[longtermism|longtermism]]. He is also a co-author, with [[krister_bykvist|Krister Bykvist]] and [[toby_ord|Toby Ord]], of the academic monograph Moral Uncertainty (2020) [4].
MacAskill rose to international prominence in the early 2020s as a public intellectual associated with longtermism, the view that positively influencing the long-term future is a key moral priority of the present. His public profile was severely complicated by the November 2022 collapse of [[ftx|FTX]], the cryptocurrency exchange founded by [[sam_bankman_fried|Sam Bankman-Fried]], who had been one of the largest funders of MacAskill's projects through the [[ftx_future_fund|FTX Future Fund]] [5]. After the collapse, MacAskill stepped back from senior leadership roles in EA, including resigning as a trustee of Effective Ventures UK in September 2023 [6]. Since 2024 he has focused on AI macrostrategy through Forethought.
MacAskill was born on 24 March 1987 in Glasgow, Scotland, and grew up in the city. He attended Hutchesons' Grammar School, an independent school in Glasgow [1]. He was originally named William David Crouch; he later took the surname MacAskill, his maternal grandmother's maiden name, on marrying the philosopher [[amanda_askell|Amanda Askell]] (born Amanda Hall) in 2013, when both adopted the same surname. The two divorced; Askell subsequently used "Askell" as her surname, while MacAskill retained the surname under which he had become publicly known [7].
MacAskill earned a BA in philosophy from Jesus College, [[cambridge_university|Cambridge]] in 2008. He then moved to Oxford, completing the BPhil at St Edmund Hall in 2010 and the DPhil at St Anne's College in 2014, spending a year at Princeton University as a visiting graduate student. His doctoral thesis, "Normative Uncertainty," was supervised by [[john_broome|John Broome]] and [[krister_bykvist|Krister Bykvist]] [1][3]. The thesis argued that we ought to act under uncertainty about which ethical theory is correct much as we act under uncertainty about ordinary matters of fact, by taking expected value across theories weighted by our credence in each. This framework, later expanded into the book Moral Uncertainty, became one of MacAskill's most cited contributions to academic philosophy.
After completing his DPhil, MacAskill held a Junior Research Fellowship at Emmanuel College, Cambridge. In 2015 he was elected to an Associate Professorship at Oxford in association with a fellowship at Lincoln College, becoming, at age 28, one of the youngest associate professors of philosophy in Oxford's history [3]. He moved to a longer-term association with the Global Priorities Institute (GPI), an interdisciplinary research centre at Oxford that conducts foundational research on how to do as much good as possible. MacAskill is a co-founder of GPI and has chaired its advisory board [8].
In 2018 he founded the Forethought Foundation for Global Priorities Research, an Oxford-based body supporting academic work in global priorities research outside GPI's institutional structure. The foundation operated until 2024, when its operations were rolled into a new, distinct organization also called Forethought, focused on AI macrostrategy [2][8].
MacAskill's academic specialties include moral uncertainty, [[population_ethics|population ethics]], longtermism, and [[decision_theory|decision theory]] under risk. His scholarly papers have appeared in journals including Ethics, Mind, the Journal of Philosophy, and Philosophy and Public Affairs. Much of his technical output concerns the foundations of effective altruism: how to compare interventions across cause areas, how to act when uncertain between consequentialist and non-consequentialist ethics, and how to weight the interests of future people against those of presently existing ones.
Doing Good Better: How Effective Altruism Can Help You Make a Difference was published by Gotham Books, an imprint of Penguin Random House, on 28 July 2015 [9]. The book is generally considered the first mass-market introduction to the effective altruism movement. MacAskill argued that ordinary giving habits in wealthy countries fail to direct resources toward the highest-impact charitable interventions, and he proposed five key questions a donor should ask, including how many people will benefit, by how much, whether the area is neglected, and what counterfactual impact a contribution will have.
The book popularized [[givewell|GiveWell]]-style charity evaluation outside the small EA community and made the case that earning a high income and donating a substantial fraction can produce more good than direct service work in many contexts. Doing Good Better received endorsements from the economist Steven Levitt, the philosopher [[peter_singer|Peter Singer]], and former Gates Foundation chief executive Sue Desmond-Hellmann, and was translated into multiple languages [9]. Reception was largely positive, though some charity professionals disputed MacAskill's specific case studies.
Moral Uncertainty, co-authored with [[krister_bykvist|Krister Bykvist]] and [[toby_ord|Toby Ord]], was published by Oxford University Press in 2020 [4]. The scholarly monograph develops a framework for action when an agent assigns positive credence to multiple, potentially incompatible moral theories. The central proposal is "Maximize Expected Choiceworthiness" (MEC), an analogue of expected utility theory in which the rightness of options is weighted by their choiceworthiness under each theory and by the agent's credence in that theory. The book also addresses intertheoretic comparisons of value and how to handle theories that lack ordinal or cardinal structure.
What We Owe the Future was published by Basic Books in the U.S. in August 2022 and by Oneworld Publications in the U.K. [10]. It is MacAskill's main popular argument for longtermism, which he defines as "the idea that positively influencing the long-term future is a key moral priority of our time." The book argues from three premises: future people count morally, there could be very many of them, and we can predictably influence how their lives go. From these premises MacAskill defends an agenda focused on reducing [[existential_risk|existential risk]], shaping long-lasting institutions, preventing the entrenchment of bad values, and supporting differential technological development.
The book reached the New York Times bestseller list and received extensive media attention, including a New Yorker profile by [[gideon_lewis_kraus|Gideon Lewis-Kraus]] titled "The Reluctant Prophet of Effective Altruism," published 8 August 2022 [11]. MacAskill appeared on Sam Harris's Making Sense, The Ezra Klein Show, The Tim Ferriss Show, Conversations with Tyler, and Lex Fridman's podcast, among many others [12]. The launch was supported by a large promotional effort funded in part through Effective Ventures; Open Philanthropy made related grants including reimbursements for student groups distributing copies for reading groups [13].
Reception was mixed. Mainstream reviewers in The New York Times, The Guardian, and Foreign Affairs generally praised the book's ambition, while academic reviewers engaged more skeptically with its population-ethics commitments. The MIT philosopher Kieran Setiya, writing in the Boston Review, argued that the book's appeal to expected-value reasoning over astronomical numbers of future people relied on contestable population-ethical premises [14]. Other critics, particularly [[emile_torres|Émile P. Torres]], rejected the longtermist project more fundamentally [15].
MacAskill co-founded [[giving_what_we_can|Giving What We Can]] (GWWC) in November 2009 with the philosopher [[toby_ord|Toby Ord]], then a postdoctoral researcher at Oxford. The organization invites members to take the GWWC Pledge, a public commitment to donate at least 10 percent of lifetime earnings to highly effective charities [16]. The launch was modest: 23 founding members at an event hosted at Oxford. By March 2022, the University of Oxford reported that GWWC had received over $2.5 billion in lifetime pledged donations and had grown into a global community [16]. By 2024, GWWC reported more than 9,000 members and over $500 million in tracked donations to recommended charities, making it one of the most visible effective-giving institutions in the world.
[[80000_hours|80,000 Hours]] was founded in 2011 by MacAskill and [[benjamin_todd|Benjamin Todd]], who became its longtime CEO. The name refers to the approximate number of hours in a 40-year career, framing career choice as a high-impact ethical decision. The organization initially offered broad advice to graduates seeking high-impact work in global health, animal welfare, and policy. Through the late 2010s and early 2020s it shifted its priority focus toward [[ai_safety|AI safety]], biosecurity, and other areas the team considered most pressing for reducing [[existential_risk|existential risk]]. 80,000 Hours runs a research site, a coaching program, a job board, and a long-running podcast hosted for many years by Rob Wiblin.
The [[centre_for_effective_altruism|Centre for Effective Altruism]] (CEA) was incorporated in 2012 as an umbrella organization for GWWC and 80,000 Hours, with MacAskill as one of its principal founders. CEA later spun out additional projects, including the EA Funds, the EA Forum, and the EA Global conference series. In 2020 CEA was reorganized under a parent entity, [[effective_ventures|Effective Ventures]] Foundation (EVF), which serves as the legal vehicle for several EA projects. MacAskill served as a trustee of EVF UK from its founding [6].
In 2024, MacAskill co-founded a new research group called Forethought together with Max Dalton, Tom Davidson, and Amrit Sidhu-Brar [2]. Built out of the operational shell of the Forethought Foundation, the new Forethought is described as an AI macrostrategy research group focused on how to navigate a potentially rapid transition to a world with very capable AI systems, including questions of [[agi|AGI]] preparedness, AI rights, space governance, and what a good post-AGI society should look like.
MacAskill's relationship with [[sam_bankman_fried|Sam Bankman-Fried]] began in 2012, when MacAskill, then a graduate student, met Bankman-Fried, then an undergraduate at MIT, and discussed effective altruism over lunch [17]. According to multiple later accounts, MacAskill encouraged Bankman-Fried to consider "earning to give" via finance rather than alternative careers, and Bankman-Fried subsequently took a position at Jane Street Capital, then founded Alameda Research in 2017 and the cryptocurrency exchange FTX in 2019.
In early 2022, FTX launched the [[ftx_future_fund|FTX Future Fund]], a philanthropic vehicle described as an effective altruism and longtermism funder, with MacAskill listed among its external advisors. MacAskill publicly welcomed the launch on Twitter, describing the Fund as "based on longtermist principles: trying to make a world that our great-great-great-grandchildren will be thankful for" [18]. The Future Fund made grants exceeding $160 million in 2022 to organizations working on biosecurity, AI safety, and longtermist research [19].
In September 2022 MacAskill texted Elon Musk, who was then in the process of acquiring Twitter, offering to introduce him to Bankman-Fried as a potential co-investor. Text messages later released as part of litigation showed MacAskill suggesting that Bankman-Fried could contribute up to $8 billion to a Twitter acquisition without outside financing, and vouching for Bankman-Fried's character when Musk asked [20]. Musk did not accept the introduction, later remarking that Bankman-Fried had "set off my BS detector."
FTX collapsed in early November 2022 amid allegations of misuse of customer funds. On 11 November 2022, MacAskill posted a Twitter thread expressing shock and condemning Bankman-Fried's apparent conduct. The following day he published a longer personal statement on the EA Forum titled "A personal statement on FTX," in which he wrote that he could not decide "which emotion is stronger: my utter rage at Sam (and others?) for causing such harm to so many people, or my sadness and self-hatred for falling for this deception," and added: "If that goodwill laundered fraud, I am ashamed" [21]. The entire FTX Future Fund team resigned simultaneously.
[[effective_ventures|Effective Ventures]] commissioned an independent investigation by the U.S. law firm Mintz into its relationship with FTX. The investigation, involving dozens of interviews and review of tens of thousands of documents, concluded in 2023; Mintz reported no evidence that anyone at EV had known of the alleged fraud at FTX or Alameda Research [22]. The U.K. Charity Commission concluded its own statutory inquiry into Effective Ventures Foundation in May 2024 [23].
MacAskill had been recused from FTX-related discussions on the EV UK board from early November 2022 onward. In September 2023 he announced that he was stepping down as a trustee of Effective Ventures UK, citing the protracted recusal and slow pace of trustee recruitment [6]. Bankman-Fried was convicted on multiple counts of fraud and conspiracy in November 2023 and sentenced to 25 years in federal prison in March 2024.
"Earning to give," the practice of pursuing a high-income career in order to donate large sums to effective charities, was articulated in the early EA community by MacAskill and [[toby_ord|Toby Ord]]. MacAskill became its most prominent advocate in the early 2010s, and was the subject of media coverage in The Atlantic, The Washington Post, and Quartz; he also appeared on CNBC in early 2013 in a segment titled "Wall Street Saves the World?" [24]. By the time Doing Good Better was published, MacAskill had begun to deemphasize earning to give, writing that only a small proportion of EA-aligned graduates should pursue it long-term and that direct work in priority areas would often be higher-impact. The 2012 lunch with Bankman-Fried, in which MacAskill encouraged a path through finance, became one of the most discussed examples of the strategy in light of the FTX collapse.
MacAskill was a featured speaker at TED in 2018 and has spoken at the World Economic Forum at Davos, EA Global conferences, and many universities. He has given congressional and parliamentary briefings on existential risk and global catastrophic risks. His 2022 book tour included appearances on The Daily Show, The Late Show with Stephen Colbert, and other broadcast outlets, unusual mainstream television attention for an academic philosopher.
In the press, MacAskill has been profiled in The New Yorker (2022, by [[gideon_lewis_kraus|Gideon Lewis-Kraus]]) [11], Time magazine (2022 cover story on effective altruism by Naina Bajekal), Wired, The Guardian, and The New York Times. He has written op-eds and essays for The New York Times, Foreign Affairs, the BBC, and other outlets.
MacAskill married the philosopher and AI researcher [[amanda_askell|Amanda Askell]] in 2013, with both adopting the surname MacAskill. They divorced in the late 2010s, after which Askell, now at [[anthropic|Anthropic]] working on AI alignment and policy, used "Askell" as her surname. The couple co-authored academic articles during their marriage [7].
MacAskill is a vegetarian on animal-welfare grounds and donates a substantial fraction of his income to effective charities. He has lived for most of his academic career in Oxford and is known for a frugal personal lifestyle that became part of media coverage during the What We Owe the Future book tour.
What We Owe the Future devoted substantial space to artificial intelligence as a potential source of [[ai_existential_risk|existential risk]] and as a technology likely to shape values and institutions far into the future. Through CEA, 80,000 Hours, and the Forethought Foundation, MacAskill helped channel attention and EA-aligned funding toward [[ai_safety|AI safety]] organizations including [[miri|MIRI]], [[redwood_research|Redwood Research]], [[arc_alignment_research_center|the Alignment Research Center]], and [[centre_for_ai_safety|the Center for AI Safety]]. EA-aligned career advice via 80,000 Hours has been a major recruitment pipeline into AI safety roles at [[anthropic|Anthropic]] and several university-based labs. Through his connections to [[open_philanthropy|Open Philanthropy]], whose principal funder [[dustin_moskovitz|Dustin Moskovitz]] has been a long-time EA donor, MacAskill has been part of the social network that has directed hundreds of millions of dollars toward AI safety research [25].
The new Forethought, founded in 2024, makes AI macrostrategy its explicit focus. MacAskill has published on AGI preparedness and "the intelligence explosion," arguing that even moderate probabilities of transformative AI within decades imply a moral imperative to invest in technical safety, governance, and post-AGI institutions.
Critics of MacAskill and longtermism have come from several directions. The most sustained academic critique has come from [[emile_torres|Émile P. Torres]], a former effective-altruism participant turned vocal opponent, who argues that longtermism is a "dangerous" ideology with intellectual roots in transhumanism and eugenics. Together with the computer scientist [[timnit_gebru|Timnit Gebru]], Torres coined the acronym [[tescreal|TESCREAL]] for what they characterize as a bundle of related ideologies (transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, and longtermism) that share progenitors and political tendencies [15]. MacAskill and other EA-aligned writers have rejected the TESCREAL framing as a polemical lumping of distinct traditions.
Kieran Setiya's Boston Review essay "The New Moral Mathematics" argued that the strong-longtermist case in What We Owe the Future depends on contested commitments in [[population_ethics|population ethics]], particularly the view that adding additional happy people to the world is, all else equal, a good thing [14]. Other philosophical reviewers, including Richard Yetter Chappell and Regina Rini, have engaged sympathetically while pressing on similar foundational issues.
A second strand of criticism focuses on EA-aligned career advice. Journalists and former EA participants have argued that MacAskill's pre-2015 emphasis on earning to give pushed talented young people into finance and crypto, and that his endorsement of Bankman-Fried as late as 2022 reflected insufficient due diligence on FTX [17][20]. The Mintz investigation found no evidence that EA leaders knew of fraud at FTX [22], but MacAskill has acknowledged in writing that he failed to apply ordinary skepticism in the case.
A third critique concerns the structure of EA itself: critics argue that the movement centralizes intellectual and financial influence in a small group of Oxford-educated philosophers and large funders, producing a movement less pluralistic than its public branding suggests.
Following his September 2023 resignation from the EV UK board, MacAskill reduced his public profile considerably. Through 2024 and into 2025 he has focused on Forethought, co-founded with Max Dalton, Tom Davidson, and Amrit Sidhu-Brar, and on academic writing on AGI preparedness, the intelligence explosion, and post-AGI governance [2]. He continues to hold his Associate Professorship at Oxford, but his public communications have shifted toward AI macrostrategy. He has begun returning to selected podcasts and conferences, particularly within the AI safety community.