Toby David Godfrey Ord (born 18 July 1979) is an Australian moral philosopher based at the University of Oxford. In November 2009 he founded Giving What We Can, a pledge organisation that became one of the seed projects of the modern effective altruism movement. From 2014 to 2024 he was a research fellow, and then senior research fellow, at Oxford's Future of Humanity Institute (FHI); since FHI's closure in April 2024 he has been a Senior Researcher at the Oxford Martin AI Governance Initiative.[1][2]
Ord is best known for his 2020 book The Precipice: Existential Risk and the Future of Humanity, which argued that the present century carries a roughly one-in-six chance of an existential catastrophe and assigned the largest single share of that risk to misaligned advanced AI. The book has become a foundational reference for modern existential risk and AI safety discourse, cited alongside Nick Bostrom's Superintelligence in policy briefings, academic curricula, and mainstream coverage of long-run risk.[3][4]
Within academic philosophy, Ord works on consequentialism, moral uncertainty, global priorities, and the ethics of risk. He has personally taken the Giving What We Can Pledge in an unusually demanding form, capping his own annual living expenses near £20,000 and committing the rest of his lifetime earnings to effective charities. Outside academia he has advised the United Nations, the World Health Organization, the OECD, the World Bank, and the UK Prime Minister's Office.[1][5]
Ord was born on 18 July 1979 in Melbourne, Australia. His parents took part in anti-nuclear marches during the late Cold War and sometimes brought him along, an exposure he has cited as an early source of his interest in catastrophic risk.[6]
He attended the University of Melbourne in the late 1990s, where he initially read computer science. After completing his Bachelor of Science with first-class honours, he took a second undergraduate degree in philosophy, motivated by an interest in ethics; he completed both degrees between 1997 and 2002.[1][6]
In 2003 Ord moved to the University of Oxford on a Commonwealth Scholarship to read for the Bachelor of Philosophy (BPhil). He stayed at Oxford for his DPhil, completed in 2009 with a thesis titled "Beyond Action: Applying Consequentialism to Decision Making and Motivation." His doctoral supervisors were Derek Parfit, whose work on personal identity, future generations and population ethics shaped much of Ord's later thinking, and John Broome, the moral philosopher and economist whose writing on decision theory and weighing future people influenced Ord's approach to expected-value reasoning under uncertainty.[1]
After completing his DPhil, Ord stayed at Oxford as a Junior Research Fellow at Balliol College from 2009. Over the following fifteen years he held a series of research positions at the university, including a Research Fellowship at FHI from 2014, promotion to Senior Research Fellow in 2019, and an affiliation with the Oxford Uehiro Centre for Practical Ethics.[1][2]
FHI was the institutional centre of much of Ord's research life. Founded in November 2005 by Nick Bostrom within Oxford's Faculty of Philosophy and the Oxford Martin School, the institute was a small, ambitious group whose work on existential risk, anthropic reasoning, and the long-run future of humanity helped shape what would later become the field of AI safety research. Ord's research interests there, broadly stated as the big-picture questions facing humanity, encompassed existential risk, longtermism, global health, moral uncertainty, hypercomputation, and the ethics of consequentialism.[2][7]
Following FHI's closure on 16 April 2024, Ord moved to the Oxford Martin School's AI Governance Initiative as a Senior Researcher, while keeping a research affiliation with Forethought and a board seat at the Centre for the Governance of AI (GovAI), a successor research group that spun out of FHI in 2021. He has been a trustee of the Centre for Effective Altruism and of 80,000 Hours.[2][8]
The Precipice: Existential Risk and the Future of Humanity was published by Bloomsbury in the United Kingdom on 5 March 2020 and by Hachette Books in the United States on 24 March 2020. It runs to 480 pages, including roughly 150 pages of notes and appendices that document the empirical estimates underlying the main text.[3]
The book's central thesis is that humanity is living through a uniquely dangerous period, which Ord names "the Precipice." He dates its beginning to the Trinity nuclear test in 1945, the moment at which humanity first acquired the power to inflict damage on a civilisational scale. The Precipice, in this framing, is the period during which humanity's destructive power outstrips the political wisdom and self-control needed to manage it.[3]
Ord organises catastrophic risks into three categories. Natural risks, such as asteroid impacts, supervolcanic eruptions, and nearby stellar explosions, he estimates to be small (collectively around one in 10,000 per century). Well-understood anthropogenic risks, including nuclear war and climate change, he treats as material but not dominant. Anthropogenic risks from emerging technologies, including engineered pandemics, unaligned artificial general intelligence, and permanent global totalitarianism enabled by surveillance technology, he treats as the largest and most neglected category.[3]
The headline numerical estimate of the book is that the total probability of an existential catastrophe in the twenty-first century is approximately one in six. Within this, Ord assigns about a one-in-ten chance to risk from unaligned AI alone, more than the combined risk from all other sources by his accounting. He gives roughly one in 30 to engineered pandemics, smaller numbers to nuclear war and climate change, and a residual category to unforeseen risks. He is explicit that these figures are subjective probabilities, intended to express orders of magnitude rather than precise predictions.[3][4]
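Read as roughly additive (a back-of-the-envelope check, not a calculation from the book; Ord stresses that the risks overlap and are not independent), the figures make the dominance of AI risk visible at a glance:

$$\underbrace{\tfrac{1}{10}}_{\text{unaligned AI}} \;>\; \underbrace{\tfrac{1}{6}-\tfrac{1}{10}}_{\text{all other sources combined}} \;=\; \tfrac{1}{15} \approx 0.067.$$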
A second philosophical move is the distinction between extinction and "existential catastrophe" more broadly construed. Following earlier work by Bostrom, Ord defines an existential catastrophe as any event that destroys the long-term potential of humanity, including outright extinction but also a permanent civilisational collapse or a stable totalitarian regime.[3]
Reviews were broadly positive. The New York Review of Books ran a long essay by Jim Holt; The Guardian, Financial Times, Bloomberg, The Atlantic and Time all reviewed it; and Bill Gates included it on his summer reading list.[4][9][10]
Ord's DPhil thesis, Beyond Action: Applying Consequentialism to Decision Making and Motivation (2009), argued for what he called a "global" form of consequentialism that applies the standard of best outcomes not only to actions but also to motivations, decision procedures, and character traits.[1]
In 2020 he published Moral Uncertainty with Oxford University Press, co-authored with William MacAskill and the Swedish philosopher Krister Bykvist. The book is the first systematic monograph on how rational agents should act when uncertain not just about empirical facts but about which moral theory is correct. It proposes the Maximize Expected Choiceworthiness criterion, an analogue of expected-utility theory that aggregates across moral theories.[11]
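Schematically (a standard formalisation consistent with the book's presentation, not a quotation from it): an agent with credence $C(T_i)$ in each moral theory $T_i$, where $CW_i(A)$ is the choiceworthiness that $T_i$ assigns to option $A$, should choose an option maximising

$$EC(A) \;=\; \sum_i C(T_i)\,CW_i(A),$$

the direct analogue of expected utility, with moral theories playing the role of empirical states of the world.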
Ord has also published academic papers on global priorities, hypercomputation, the reversal test in applied ethics (with Bostrom), and on the marginal cost-effectiveness of health interventions. His paper "The Moral Imperative Toward Cost-Effectiveness in Global Health" (Center for Global Development, 2013) influenced GiveWell-style charity evaluation by documenting order-of-magnitude differences in cost per disability-adjusted life year between health interventions.[12]
Since 2024 he has published a widely discussed AI scaling series on his personal site, including "The Scaling Paradox" and "The Extreme Inefficiency of Reinforcement Learning," arguing that recent frontier-model gains depend less on raw pretraining compute and more on inference-time and reinforcement-learning methods that may scale less favourably than headline trends suggest.[13]
In November 2009, while a graduate student at Oxford, Ord launched Giving What We Can (GWWC), an international society whose members publicly pledge to donate at least 10 percent of their lifetime income to charities they believe will do the most good. He founded the organisation together with his wife Bernadette Young and William MacAskill, then also a graduate student. Within a year the society had 64 members who had collectively pledged more than $20 million in future donations.[14][15]
Ord's own version of the pledge was unusually strict. He announced that he would cap his annual personal spending at £20,000 (later revised down to £18,000), with the threshold rising annually with inflation, and donate everything he earned above that level. By December 2019 he had given away approximately £106,000, or 28 percent of his cumulative income, and has stated he expects his lifetime donations to exceed £1 million.[1]
GWWC was initially staffed by volunteers under the umbrella of FHI, Balliol College, and the Oxford Uehiro Centre. In 2012 it transitioned to full-time staff and was incorporated under the Centre for Effective Altruism, which Ord and MacAskill helped establish around the same time. By 2024 the society had grown beyond 9,000 active members across more than 100 countries, with cumulative reported donations exceeding $300 million.[14][16]
Ord is one of the small group of Oxford-based researchers credited as co-founders of the broader effective altruism movement. The phrase "effective altruism" was coined in 2011 around the formation of the Centre for Effective Altruism, and the umbrella project drew much of its initial framing from the GWWC pledge, MacAskill and Benjamin Todd's 80,000 Hours career-research project, and methodology in use at GiveWell.[17]
Within EA, Ord has been associated with the global-priorities wing. His writing emphasises cause neutrality, expected-value reasoning under uncertainty, and taking seriously both near-term and long-run effects. He has advised funders including Open Philanthropy, backed by Cari Tuna and Dustin Moskovitz. The scale-tractability-neglectedness framework for cause-area triage, which he helped popularise, has become standard in EA-aligned grantmaking.[5]
Ord is one of the principal popularisers of longtermism, the view that influencing the long-run future is among the most important moral priorities of our time. Whereas Bostrom's earlier writing established the conceptual scaffolding, and MacAskill's 2022 book What We Owe the Future presented the argument in its broadest popular form, The Precipice gave longtermism its most quantitative and policy-oriented presentation.[3][18]
In Ord's framing, taking the long-run future seriously does not require strong claims about vast hypothetical future populations. Even on conservative estimates of survival, the expected number of future human lives is so large compared to the present that even small reductions in existential risk are highly cost-effective in welfare terms.[3]
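A toy calculation shows the structure of this argument (the numbers are illustrative placeholders, not figures from the book): if humanity's survival would yield at least $N = 10^{14}$ future lives in expectation, then an intervention that reduces existential risk by only $\Delta p = 10^{-6}$ carries an expected value of

$$\Delta p \cdot N \;=\; 10^{-6} \times 10^{14} \;=\; 10^{8}$$

expected future lives, which is why even small, uncertain reductions in risk can dominate conventional cost-effectiveness comparisons.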
Ord's specific probability estimates have been the subject of detailed discussion. The one-in-six headline figure has been cited and contested in academic philosophy, AI policy circles, and popular reviews. Ord has emphasised in subsequent essays, including "The Precipice Revisited," that the figures should be read as orders of magnitude and that his views on specific risks have shifted with new evidence.[19]
Ord has played an unusually active public-policy role for an academic philosopher. He has briefed and advised a range of national and international bodies, including the UK Prime Minister's Office, the United Nations, the World Health Organization, the OECD, and the World Bank, and has spoken at policy events convened around G7 and G20 meetings.[2][8]
Ord was one of the outside experts consulted during preparation of the United Nations Secretary-General's Our Common Agenda report, published in September 2021. He and other longtermism-aligned researchers contributed advice on the report's sections dealing with future generations and existential risk, and a chapter of the published report addressed long-term risks explicitly. He was not a member of the Secretary-General's High-Level Advisory Board on Effective Multilateralism (HLAB), the 12-person panel co-chaired by Ellen Johnson Sirleaf and Stefan Löfven that produced the Breakthrough for People and Planet report in 2023.[20]
Ord has been a regular voice in mainstream media coverage of AI risk, with interviews and op-eds in The Guardian, the BBC, The Atlantic, Time, The Times of London, and Bloomberg. He is a frequent guest on long-form podcasts, including 80,000 Hours, The Sam Harris Podcast, Lex Fridman's podcast, and the Future of Life Institute podcast. He has argued for two complementary tracks of action: technical research on AI alignment and capability evaluations, and international AI governance backed by enforceable agreements analogous to existing nuclear non-proliferation regimes.[21][22]
His recent writing has addressed AI capability evaluations, the relative roles of pretraining compute and inference-time techniques in driving recent capability gains, and the implications for governance. The 2024-2025 AI scaling series on his personal site has been widely discussed both inside the AI safety community and among outside commentators on the economics of frontier model development.[13]
Ord is not among the named signatories of the May 2023 Statement on AI Risk organised by the Center for AI Safety (CAIS), which was signed by figures including Geoffrey Hinton, Yoshua Bengio, Demis Hassabis, Sam Altman, Bill Gates and Peter Singer. His public engagement on similar questions runs both before and after that statement.[23]
Ord lives in Oxford with his wife, Bernadette Young, an infectious-disease physician at the John Radcliffe Hospital who also works on medical ethics and public-health policy. The couple have one child, a daughter. Young was a co-founder of Giving What We Can with her husband and MacAskill. Ord and Young have said that they treat the costs of raising their daughter as outside the scope of their giving threshold.[1][24]
Ord's giving remains a matter of public record, partly because the GWWC pledge is transparent by design. The cap on his personal spending (originally £20,000, later £18,000) rises with UK inflation. The pattern of his giving has shifted over the years: earlier years tilted toward global health charities recommended by GiveWell, while later years have allocated a larger share to AI safety, biosecurity, and other longtermist organisations.[1][5]
The following table summarises Ord's main academic and organisational roles.
| Role | Organisation | Period |
|---|---|---|
| Junior Research Fellow | Balliol College, Oxford | 2009 onward |
| Research Fellow | Future of Humanity Institute | 2014 to 2019 |
| Senior Research Fellow | Future of Humanity Institute | 2019 to 2024 |
| Senior Researcher | Oxford Martin School AI Governance Initiative | 2024 onward |
| Research Affiliate | Forethought | 2024 onward |
| Board member | Centre for the Governance of AI | 2021 onward |
| Affiliated researcher | Oxford Uehiro Centre | various |
| Founder | Giving What We Can | 2009 onward |
FHI, Ord's institutional home for most of his research career, was closed by the University of Oxford on 16 April 2024 after years of administrative friction with the Faculty of Philosophy. According to FHI's 2024 final report, the Faculty imposed a freeze on hiring and fundraising in 2020 and in late 2023 decided not to renew the contracts of remaining staff; the report described the process as a "death by bureaucracy."[7][25]
Ord publicly expressed regret at the closure. He moved his Oxford affiliation to the Oxford Martin School's AI Governance Initiative and kept his trustee and advisory roles at GovAI, Forethought, and the Centre for Effective Altruism. FHI's intellectual legacy continues through these successor organisations, through the Centre for the Study of Existential Risk at Cambridge and the Forecasting Research Institute, and through the body of work it published.[7][8]
Reception of Ord's work has been broadly positive in academic philosophy, AI safety, and policy circles. The Precipice has been described as a careful and methodologically transparent contribution that took an idea previously confined to a small academic literature and presented it in a form that policymakers, journalists, and general readers could engage with. Its press reception, surveyed above, was generally favourable, and the book has been cited in subsequent work by Bostrom, MacAskill, Eliezer Yudkowsky, and others.[4][9][26]
Criticisms have come from several directions. Within philosophy, some scholars have questioned whether subjective probability estimates of low-frequency, high-impact events can be calibrated meaningfully, and whether the expected-value framework is the right tool for civilisational decisions. Others have challenged specific numbers, particularly the low estimate of natural risk and the high estimate of AI risk. From outside academic philosophy, critics including the philosopher Émile Torres and the computer scientist Timnit Gebru have placed The Precipice within a broader cluster of techno-utopian ideologies they label TESCREAL, arguing that the longtermist framing risks deprioritising present-day harms in the name of hypothetical future populations. Ord has been included in critical analyses of the longtermist movement, although his book anticipates several of these objections and avoids the strongest population-ethics claims that have drawn the most criticism.[27][28]
Unlike his collaborator MacAskill, Ord was not a publicly identified advisor to FTX or its philanthropic arm the FTX Future Fund, and he was not part of the Future Fund's board that resigned during the November 2022 collapse. He has therefore been less directly entangled with the FTX controversy than other senior EA figures.[29]
Following FHI's closure, Ord has continued active research and public engagement from his base at the Oxford Martin AI Governance Initiative. His AI scaling series, published serially on his personal site beginning in 2024, has become his most-cited recent work. "The Scaling Paradox" argues that the smooth progress described by AI scaling laws is bought with exponentially growing compute, so that gains from scale alone become increasingly expensive, and "The Extreme Inefficiency of Reinforcement Learning" presents a quantitative case that current RL methods used for chain-of-thought training are several orders of magnitude less compute-efficient than pretraining. The series has been cited in policy commentary, financial analyst reports on AI infrastructure, and academic discussion of capability forecasting.[13][30]