Dario Amodei (born 1983) is an Italian-American artificial intelligence researcher, entrepreneur, and the co-founder and CEO of Anthropic, the AI safety company behind the Claude family of large language models. Before founding Anthropic, Amodei served as Vice President of Research at OpenAI, where he led the development of GPT-2 and GPT-3. He left OpenAI in late 2020 over concerns about the organization's direction on safety and commercialization, and in 2021 he co-founded Anthropic with his sister Daniela Amodei and five other former OpenAI researchers. Under his leadership, Anthropic has grown into one of the most valuable AI companies in the world, reaching a $380 billion valuation in February 2026. Amodei is known for his views on the near-term arrival of powerful AI systems and his advocacy for a pragmatic, safety-focused approach to AI development. He has authored influential essays including "Machines of Loving Grace" (2024) and "The Adolescence of Technology" (2025), testified before the U.S. Senate on AI risks, and was named to the Time 100 list of the world's most influential people in 2025.
Dario Amodei was born in 1983 in San Francisco's Mission District [1]. He grew up in a household shaped by two distinct cultural traditions. His father, Riccardo Amodei, was an Italian leather craftsman originally from Massa Marittima, a small town in Tuscany near the island of Elba. His mother, Elena Engel, is a Jewish-American woman from Chicago who worked as a project manager overseeing renovation and construction projects for public libraries in Berkeley and San Francisco [1][3].
Dario's sister, Daniela Amodei, was born four years after him. The siblings grew up in a household that valued intellectual curiosity and civic engagement, and their parents instilled in both children a strong sense of ethics and responsibility. "They gave me a sense of right and wrong and what was important in the world," Amodei later told journalist Alex Kantrowitz [3]. Daniela has recalled that Dario's aptitude was apparent from a young age: he would declare "counting days," devoting them to seeing how high he could count [3].
Riccardo Amodei, who had long battled a rare illness, died in 2006, when Dario was in his early twenties [1]. Those close to Amodei have described the loss of his father during this formative period as an experience that deepened his seriousness and his desire to work on problems of genuine consequence.
Amodei's educational path reflects his interdisciplinary interests, spanning physics, biophysics, and computational neuroscience.
Amodei attended Lowell High School, San Francisco's prestigious public magnet school known for its rigorous academics and emphasis on science and mathematics [1]. During high school, he was selected as a member of the USA Physics Olympiad team in 2000, demonstrating exceptional talent in physics at a national level [2].
Amodei began his undergraduate education at the California Institute of Technology (Caltech), where he worked with Professor Tom Tombrello in the Physics 11 program, a hands-on experimental physics course designed for exceptionally motivated students [1]. He later transferred to Stanford University, where he completed a Bachelor of Science in physics in 2006 [2].
Amodei pursued a PhD at Princeton University, supported by a prestigious Hertz Fellowship awarded in 2007 [2]. He worked under the supervision of Professors William Bialek and Michael Berry in the Department of Molecular Biology and the Princeton Neuroscience Institute. His doctoral research sat at the intersection of physics, biology, and computation, focusing on large-scale electrophysiology of neural circuits.
His dissertation, titled "Network-Scale Electrophysiology: Measuring and Understanding the Collective Behavior of Neural Circuits," developed novel methods for recording and driving the electrical behavior of nearly every neuron in a small patch of tissue. Specifically, he studied network dynamics of over 200 cells in a 0.5 x 0.5 mm patch of retinal tissue, building new computational models that captured observed network dynamics more accurately than previously used models. His work provided strong evidence for critical phenomena in neural networks, a key theoretical prediction in computational neuroscience that had previously lacked experimental support [2][16].
Amodei received his PhD in 2011, and in 2012 his dissertation was awarded the Hertz Thesis Prize, recognizing it as one of the most outstanding doctoral theses completed by a Hertz Fellow [17].
| Degree | Institution | Year | Field | Notes |
|---|---|---|---|---|
| High School Diploma | Lowell High School, San Francisco | ~2001 | N/A | USA Physics Olympiad team (2000) |
| B.S. | Stanford University | 2006 | Physics | Began at Caltech; transferred to Stanford |
| Ph.D. | Princeton University | 2011 | Biophysics / Computational Neuroscience | Hertz Fellowship (2007), Hertz Thesis Prize (2012) |
After completing his doctorate, Amodei undertook postdoctoral research at the Stanford University School of Medicine, working on applications of mass spectrometry to analyze cellular proteomes and search for cancer biomarkers [1]. This biomedical research experience gave him a perspective on applied science that would later inform his thinking about AI's potential to accelerate scientific discovery, a theme he explored at length in his 2024 essay "Machines of Loving Grace."
From November 2014 to October 2015, Amodei worked at Baidu, the Chinese technology company, during a period when Baidu was aggressively expanding its AI capabilities under the leadership of Andrew Ng. Based at Baidu's Silicon Valley AI Lab, Amodei contributed to the development of Deep Speech 2, a speech recognition system that applied deep learning techniques to achieve near-human accuracy across multiple languages. His work at Baidu exposed him to the challenges of deploying deep learning systems at large scale and deepened his understanding of the rapid pace of AI progress [1][3].
After leaving Baidu, Amodei joined Google as a Senior Research Scientist on the Google Brain team. At Google Brain, he worked on research related to deep learning and neural network scaling, gaining experience with the computational resources and organizational structures needed to train increasingly large models. During this period, he co-authored "Concrete Problems in AI Safety" (2016), a widely cited paper that helped establish the field of AI safety as a serious area of technical research. The paper identified five practical problems that arise when deploying machine learning systems in the real world, including avoiding negative side effects, reward hacking, and safe exploration [1][3].
In 2016, Amodei joined OpenAI, the AI research organization originally structured as a nonprofit. He rose to become Vice President of Research, one of the most senior technical leadership positions at the organization [3].
At OpenAI, Amodei oversaw some of the most significant language model research of the era. His team led the development of GPT-2, a large language model released in February 2019 that generated considerable public attention for its ability to produce coherent, human-like text. The release was notable for OpenAI's decision to initially withhold the full model, citing concerns about potential misuse, a decision that sparked debate about responsible AI disclosure [3].
Amodei's team subsequently developed GPT-3, a 175-billion-parameter model released in June 2020 that represented a dramatic leap in capability. GPT-3 demonstrated that scaling up language models could produce emergent abilities not present in smaller models, a finding that profoundly shaped the direction of AI research and contributed to the scaling laws framework that Amodei and his colleagues would continue to develop [3].
Amodei is also credited as a co-inventor of reinforcement learning from human feedback (RLHF), the training technique that proved central to aligning large language models with human preferences and that would later underpin products like ChatGPT [3].
In December 2020, Amodei and several other senior researchers left OpenAI. The departures were driven by disagreements over the organization's post-2019 restructuring into a capped-profit model and its deepening commercial ties with Microsoft. Amodei and his colleagues felt that these changes diluted OpenAI's original commitments to ethical AI development and long-term safety, and that the pace of commercialization was outrunning safety research [4].
The group was particularly concerned about the risks of scaling AI models without robust safety measures in place. They believed that as models grew more powerful, the need for rigorous safety research would become more urgent, not less, and they worried that commercial pressures at OpenAI were pushing in the opposite direction. Amodei later explained his decision publicly, saying he left because he believed a new organization could better balance cutting-edge capability research with genuine safety commitments [4][5].
The departure was not acrimonious. Amodei has spoken respectfully of his time at OpenAI while maintaining that he and his colleagues saw an opportunity to build a different kind of AI company from the ground up.
In 2021, Dario Amodei, Daniela Amodei, and five other former OpenAI employees co-founded Anthropic. The seven co-founders were:
| Co-founder | Role at Anthropic | Previous Role at OpenAI |
|---|---|---|
| Dario Amodei | CEO | Vice President of Research |
| Daniela Amodei | President | Vice President of Safety and Policy |
| Tom Brown | Researcher | Core developer of GPT-3 |
| Jack Clark | Head of Policy (initially) | Policy Director |
| Jared Kaplan | Researcher | Researcher (scaling laws) |
| Sam McCandlish | Researcher | Researcher (scaling laws) |
| Chris Olah | Head of Interpretability | Researcher (interpretability) |
The company was structured as a public benefit corporation (PBC), a legal designation that requires balancing shareholder returns with broader social impact [5]. In addition to the PBC structure, Anthropic established the Long-Term Benefit Trust (LTBT), an independent governance body of five financially disinterested members with authority to select and remove a growing portion of the company's board of directors. The LTBT was designed to ensure Anthropic stays aligned with its safety mission even under commercial pressure or in the event of an IPO [18].
Dario serves as CEO, while Daniela serves as President. From the start, Anthropic positioned itself as a company that would treat safety as a core part of its business strategy, not merely as an afterthought [5].
| Detail | Anthropic |
|---|---|
| Founded | 2021 |
| Co-founders | Dario Amodei (CEO), Daniela Amodei (President), and 5 others |
| Headquarters | San Francisco, California |
| Structure | Public Benefit Corporation with Long-Term Benefit Trust |
| Primary Product | Claude (family of large language models) |
| Valuation (Feb 2026) | $380 billion |
| Annual Revenue (2025) | ~$10 billion |
Anthropic has raised extraordinary amounts of capital as AI competition has intensified. Key funding milestones include:
| Date | Round | Amount | Valuation |
|---|---|---|---|
| 2021–2023 | Early rounds | Multiple rounds | N/A |
| September 2023 | Amazon investment | Up to $4 billion | N/A |
| March 2025 | Series E | $3.5 billion | $61.5 billion |
| September 2025 | Series F | $13 billion | $183 billion |
| February 2026 | Series G | $30 billion | $380 billion |
The February 2026 round, led by GIC and Coatue with participation from D.E. Shaw Ventures, Dragoneer, Founders Fund, ICONIQ, and MGX, was the second-largest venture funding deal of all time [6]. Anthropic's revenue trajectory has been equally dramatic: from essentially zero at the beginning of 2023, the company reached $1 billion in annualized revenue by early 2025, $5 billion by August 2025, and approximately $10 billion for full-year 2025. By early 2026, annualized revenue had climbed to $14 billion [6].
One of Anthropic's most distinctive technical contributions is Constitutional AI (CAI), a training method that aligns AI systems with a set of principles described in a central document (the "constitution") rather than relying solely on human feedback for every decision [7].
In traditional reinforcement learning from human feedback (RLHF), human raters evaluate model outputs and the model learns to produce responses that humans prefer. Constitutional AI adds a layer of self-supervision: the model is trained to evaluate its own outputs against a written set of principles and to revise its responses accordingly. This approach aims to make alignment more scalable and transparent, since the principles can be publicly stated and debated [7].
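The critique-and-revision loop at the heart of this approach can be illustrated with a toy sketch. This is not Anthropic's implementation: `generate`, `critique`, and `revise` are illustrative stand-ins for language-model calls, and the two-principle constitution is invented for the example.

```python
# Toy sketch of a Constitutional AI critique-and-revision pass.
# In a real system, generate/critique/revise would each be calls to a
# language model; here they are simple stand-ins so the control flow runs.

CONSTITUTION = [
    "Choose the response that is most helpful, honest, and harmless.",
    "Avoid responses that could assist with dangerous activities.",
]

def generate(prompt):
    # Stand-in for an initial model completion.
    return f"Draft answer to: {prompt}"

def critique(response, principle):
    # Stand-in for asking the model whether the response violates the
    # principle; here we simply flag the word "dangerous".
    return "dangerous" in response.lower()

def revise(response, principle):
    # Stand-in for a model-written revision addressing the critique.
    return response.replace("dangerous", "[removed]")

def constitutional_pass(prompt):
    """Generate a response, then check and revise it against each principle."""
    response = generate(prompt)
    for principle in CONSTITUTION:
        if critique(response, principle):
            response = revise(response, principle)
    return response

print(constitutional_pass("How do I stay safe online?"))
# prints "Draft answer to: How do I stay safe online?"
```

In the full training pipeline, prompt/revision pairs produced by a loop like this are used as supervised fine-tuning data, and model-generated preference judgments against the constitution replace human preference labels in the reinforcement learning stage.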
Anthropic has published its constitution and has updated it over time. The latest version, as of early 2026, reads less like a compliance checklist and more like guidance for an autonomous agent, reflecting the increasing sophistication of the Claude models it governs [7].
In September 2023, Anthropic introduced its Responsible Scaling Policy (RSP), a framework for managing the risks that emerge as AI models become more capable. Amodei presented the RSP at the UK AI Safety Summit at Bletchley Park in November 2023 [8]. The policy established "AI Safety Levels" (ASLs), analogous to biosafety levels (BSLs), that define the safety measures required before a model of a given capability level can be deployed.
The ASL framework defines four levels:
| Level | Description | Safety Requirements |
|---|---|---|
| ASL-1 | Systems with little to no risk (e.g., a chess engine) | Minimal |
| ASL-2 | Current-generation models with broad capabilities but no catastrophic misuse potential | Standard safety measures |
| ASL-3 | Models that become operationally useful for catastrophic misuse (e.g., CBRN threats) | Significantly enhanced security, monitoring, and deployment restrictions |
| ASL-4 | Models with near-human-level autonomy or that become a primary source of a serious global security threat | Maximum security, potential deployment restrictions |
The core idea is that as models cross capability thresholds, the safety infrastructure surrounding them must scale proportionally. Anthropic committed to not deploying models beyond certain capability levels unless adequate safety measures were in place [8].
However, in February 2026, Anthropic updated its RSP, dropping its categorical pledge to halt further training of models that crossed certain capability thresholds until adequate safety guarantees were in place. The updated policy replaced the categorical pause trigger with a dual condition requiring both AI race leadership and material catastrophic risk. The company explained that shortcomings in the two-year-old policy could hinder its ability to compete in a rapidly growing AI market [9]. This change drew criticism from some in the AI safety community who viewed it as a retreat from Anthropic's foundational commitments. Amodei acknowledged the tension, saying, "We're under an incredible amount of commercial pressure and make it even harder for ourselves because we have all this safety stuff we do that I think we do more than other companies" [10].
Amodei and several of his Anthropic co-founders, particularly Jared Kaplan and Sam McCandlish, have been central figures in developing the theory of neural scaling laws. Their research at OpenAI demonstrated predictable relationships between model size, dataset size, compute budget, and model performance. These scaling laws have become foundational to how the AI industry plans and invests in model training. Amodei has repeatedly stated publicly that scaling laws have not hit a wall and that continued scaling will produce significant capability gains [13].
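The canonical form of these relationships, as fitted in the 2020 paper "Scaling Laws for Neural Language Models" by Kaplan, McCandlish, Amodei, and colleagues, expresses test loss as a power law in each resource when the others are not bottlenecks (the exponents below are approximate fitted values from that paper):

```latex
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad
L(C_{\min}) \approx \left(\frac{C_c}{C_{\min}}\right)^{\alpha_C}
```

where $L$ is cross-entropy loss, $N$ is the number of model parameters, $D$ is dataset size in tokens, $C_{\min}$ is the minimum compute budget, and the fitted exponents are roughly $\alpha_N \approx 0.076$, $\alpha_D \approx 0.095$, and $\alpha_C \approx 0.050$. The practical consequence is predictability: a lab can estimate the loss of a model ten times larger before spending the compute to train it.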
On October 11, 2024, Amodei published "Machines of Loving Grace," a roughly 15,000-word essay describing how AI could transform the world for the better if development goes well [11]. The title references "All Watched Over by Machines of Loving Grace," a 1967 poem by Richard Brautigan that imagines a pastoral future where technology and nature coexist in harmony.
The essay was notable because Amodei and Anthropic had primarily focused their public communications on AI risks. In "Machines of Loving Grace," Amodei sketched an optimistic scenario in which powerful AI compresses 50 to 100 years of biological and medical progress into 5 to 10 years, a concept he called the "compressed 21st century" [11]. He covered five major areas where AI could produce transformative benefits:
| Area | Amodei's Vision |
|---|---|
| Biology and Health | AI-driven acceleration of drug discovery, cancer treatment, and infectious disease eradication |
| Neuroscience and Mental Health | Better understanding of the brain leading to improved treatments for psychiatric conditions |
| Economic Development | Raising living standards globally, particularly in developing nations |
| Governance and Democracy | AI tools that improve governance, reduce corruption, and strengthen democratic institutions |
| Work and Meaning | Navigating economic disruption while preserving human purpose and dignity |
Amodei emphasized that this optimistic scenario was contingent on getting safety right and distributing AI's benefits broadly rather than concentrating them among a few. The essay was widely discussed in the AI community and beyond, with commentators noting the unusual combination of an AI safety leader making an extended case for AI's upside potential [11].
Amodei followed up with a second major essay, "The Adolescence of Technology," a roughly 20,000-word reflection on where AI stands and where it may be headed [12]. Published alongside the Anthropic co-founders' giving pledge in late January 2026, the essay argues that humanity is entering a critical period in which it will be handed "almost unimaginable power" through AI, and that it is deeply unclear whether social, political, and technological systems are mature enough to wield it responsibly.
In the essay, Amodei identifies five categories of existential risk from powerful AI and advocates a "sober, fact-based" approach to AI governance that avoids both the doomerism of 2023-2024 and the uncritical techno-optimism that has replaced it [12].
On July 25, 2023, Amodei testified before the U.S. Senate Judiciary Committee's Subcommittee on Privacy, Technology and the Law at a hearing titled "Oversight of A.I.: Principles for Regulation." He appeared alongside AI researcher Yoshua Bengio and other experts [19].
In his written and oral testimony, Amodei warned that the medium-term future posed the most "alarming combination of imminence and severity" with respect to AI risks. He focused particularly on the potential for AI to lower barriers to biological weapons development, describing scenarios in which advanced AI could help non-state actors create dangerous pathogens. He also discussed Anthropic's approach to safety, including the use of Constitutional AI to make models less likely to respond to harmful requests, and called for a collaborative approach between AI companies and government regulators [19][20].
In November 2025, the House Homeland Security Committee called Amodei to testify at a December 2025 hearing regarding a Chinese state-sponsored cyber-espionage campaign that had exploited Claude Code. This incident highlighted the dual-use nature of powerful AI systems and the ongoing challenges of preventing misuse by sophisticated adversaries [21].
Amodei's position on AI risk is distinctive for its combination of urgency and pragmatism. He believes that powerful AI systems, potentially approaching or exceeding human-level capabilities in many domains, could arrive within a few years, not decades. In a November 2024 interview on the Lex Fridman Podcast (episode #452, running over five hours), Amodei stated that he expects AI to reach human-level intelligence between 2026 and 2027 based on current trajectories in computing power and data access [22]. He prefers the term "powerful AI" over "artificial general intelligence" (AGI), viewing the latter as too vague.
At the same time, he argues that these risks are manageable with sufficient investment in safety research, thoughtful governance, and responsible corporate behavior. He has described the situation as a "country of geniuses in a datacenter," a metaphor for AI systems that could possess enormous capability but whose values and behaviors must be carefully shaped [12].
He has been critical of both those who dismiss AI risks entirely and those who advocate for halting AI development, arguing instead for a middle path that combines aggressive capability development with equally aggressive safety work.
One area where Amodei has been particularly specific about risks is the potential for AI to enable biological weapons development. As of mid-2025, Anthropic's internal measurements indicated that large language models may already be providing substantial assistance in bioweapon-related areas. This led to Claude Opus 4 being released under AI Safety Level 3 (ASL-3) protections, and Anthropic implemented specialized classifiers to detect and block bioweapon-related outputs, with these classifiers accounting for close to 5% of total inference costs [12].
Under Amodei's leadership, Anthropic has developed and released the Claude family of AI models, which compete with OpenAI's GPT series and Google's Gemini models.
| Model | Release Period | Key Characteristics |
|---|---|---|
| Claude 1 | March 2023 | Initial release, focused on helpfulness and safety |
| Claude 2 | July 2023 | Improved capabilities, longer context window |
| Claude 3 (Haiku, Sonnet, Opus) | March 2024 | Family of models at different capability/cost tiers |
| Claude 3.5 Sonnet | June 2024 | Strong performance at mid-tier pricing |
| Claude 4 (Sonnet, Opus) | 2025 | Next-generation models with expanded capabilities |
Claude models are used by enterprises across industries, with Anthropic deriving roughly 80% of its revenue from business customers [6]. The company has also developed Claude Code, an AI coding assistant, which reached $2.5 billion in annualized revenue by February 2026 [6].
Daniela Amodei, Dario's younger sister, serves as President of Anthropic and handles much of the company's operational, business, and policy work. Before Anthropic, she served as Vice President of Safety and Policy at OpenAI. Before that, she spent a decade at Stripe, the payments company, where she rose to senior roles in finance and operations [15].
The sibling co-founder dynamic is unusual in Silicon Valley. Dario has said that working with his sister brings a level of trust and candor that would be difficult to achieve with a non-family co-founder. Daniela's complementary skill set, focused on operations, policy, and business development, allows Dario to concentrate more heavily on research direction and technical strategy. The two reportedly have a direct communication style with each other, and colleagues have noted that their shared upbringing and family values create an unusually strong foundation for the high-stakes decisions involved in running a frontier AI company [15].
Daniela has played a particularly important role in Anthropic's policy and government relations work, as well as in building the company's operational infrastructure during its period of hypergrowth. A January 2026 CNBC profile described the Amodei siblings as potentially holding "the key to generative AI," highlighting how their combined technical and operational strengths have shaped Anthropic's trajectory [15].
Amodei has described spending up to 40% of his time on company culture rather than products, an unusual allocation for a technology CEO [14]. He has emphasized the importance of building an organizational culture that genuinely prioritizes safety, arguing that safety cannot be bolted on as an afterthought but must be embedded in how the company thinks, hires, and makes decisions.
His leadership style has been characterized as intellectual and deliberate, reflecting his academic background. Colleagues have noted his ability to hold both the technical and strategic dimensions of AI development in mind simultaneously, a skill that has proven essential for navigating the tensions between Anthropic's safety mission and its commercial ambitions [14].
In a February 2026 Fortune profile, Amodei expressed deep discomfort with the concentration of power in the AI industry, noting that a small number of individuals at a handful of companies are making decisions that will affect billions of people. "The thing to worry about is a level of wealth concentration that will break society," he said, pointing to the unprecedented fortunes being created by AI and arguing that the industry has a responsibility to address the inequality it may generate [10][23].
In late January 2026, all seven of Anthropic's co-founders announced a pledge to donate 80% of their personal wealth over time, with the stated goal of combating AI-driven economic inequality [23]. At the time of the announcement, Forbes estimated each co-founder's net worth at approximately $3.7 billion, though subsequent funding rounds likely increased that figure significantly. The pledge could ultimately direct tens of billions of dollars toward philanthropic causes.
Amodei framed the pledge in the context of his essay "The Adolescence of Technology," arguing that the wealth created by AI will be so large and so concentrated that it could destabilize democratic societies unless actively redistributed. Other Anthropic employees have also committed to donating shares, and the company has said it will match those contributions [23].
Amodei has received growing recognition as Anthropic's profile has risen.
| Year | Recognition |
|---|---|
| 2012 | Hertz Thesis Prize for doctoral dissertation [17] |
| 2023 | Testified before U.S. Senate on AI regulation [19] |
| 2024 | Time 100 Most Influential People in AI [24] |
| 2025 | Time 100 Most Influential People in the World [24] |
| 2025 | Time "Architects of AI" (Person of the Year) [24] |
| 2025 | U.S. News Best Leaders (Business category) [25] |
| 2025 | Keynote speaker at Databricks Data + AI Summit |
| 2026 | Forbes Billionaires List (#567, estimated $6.8-7 billion net worth) [26] |
| 2026 | Featured in Fortune, CNN, CNBC profiles |
As of early 2026, Amodei continues to lead Anthropic as CEO during a period of extraordinary growth and intensifying competition. The company closed its $30 billion Series G round in February 2026 at a $380 billion valuation, making it one of the most valuable private companies in the world [6].
Anthropic's annualized revenue reached $14 billion, driven by enterprise adoption of Claude models and the rapid growth of Claude Code [6]. The company has expanded its team significantly, leasing an entire 27-story office tower in downtown San Francisco in January 2026 to accommodate its growth [27]. Anthropic continues to invest heavily in both capability research and safety infrastructure.
Amodei has stated publicly that scaling laws have not hit a wall and that 2026 will see a "radical acceleration" in AI capabilities, with laboratory results that far exceed what the public currently perceives spilling over into real-world applications [13]. He has positioned Anthropic as a company that must compete at the frontier of AI capabilities in order to have a seat at the table when it comes to shaping how those capabilities are governed.
The tension between safety commitments and commercial pressures remains a defining challenge. The 2026 update to the Responsible Scaling Policy drew scrutiny, and Amodei has acknowledged the difficulty of the balancing act. "We're under an incredible amount of commercial pressure," he told Fortune, while maintaining that Anthropic still does more on safety than its competitors [10].
Amodei is known for being relatively private. He lives in San Francisco and has described his interests as spanning science, philosophy, and the long-term future of technology. His personal blog at darioamodei.com hosts his major essays, including "Machines of Loving Grace" and "The Adolescence of Technology" [11][12].