The AI Safety Summit refers to a series of international summits convened by national governments to address the risks and governance challenges posed by advanced artificial intelligence systems. The series began with the Bletchley Park Summit in November 2023 and has continued through events in Seoul, Paris, and New Delhi. These summits represent the most significant multilateral effort to date to coordinate global responses to AI safety concerns, bringing together heads of state, technology executives, researchers, and civil society organizations.
The summit series emerged against a backdrop of rapid advances in large language models and growing alarm among researchers about the potential risks of frontier AI. In the months preceding the first summit, the Center for AI Safety released a statement signed by hundreds of prominent figures warning that "mitigating the risk of extinction from AI should be a global priority," and the Future of Life Institute's open letter calling for a pause on giant AI experiments had drawn over 30,000 signatures [1][2].
The first AI Safety Summit took place on November 1-2, 2023, at Bletchley Park in Milton Keynes, United Kingdom. Bletchley Park was chosen for its historical significance as the site where Alan Turing and other codebreakers deciphered encrypted German communications, including the Enigma cipher, during World War II. The summit was hosted by UK Prime Minister Rishi Sunak, who had positioned Britain as an aspiring global leader on AI governance [3].
The summit brought together representatives from 28 countries and the European Union. Notably, both the United States and China participated, marking a rare instance of cooperation on technology policy between the two rivals. Other attendees included representatives from Brazil, France, Germany, India, Ireland, Israel, Italy, Japan, Kenya, the Kingdom of Saudi Arabia, Nigeria, the Philippines, Singapore, South Korea, and the United Arab Emirates, among others. The European Commission also participated as a separate signatory alongside individual EU member states [3].
Technology leaders in attendance included OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, Anthropic CEO Dario Amodei, and Meta chief AI scientist Yann LeCun. Academic researchers, civil society groups, and international organizations also attended [4].
The summit's central outcome was the Bletchley Declaration, signed by all 28 countries and the EU on November 1, 2023. The declaration represented the first international agreement on the risks posed by frontier AI. It affirmed that AI should be "designed, developed, deployed, and used in a manner that is safe, human-centric, trustworthy and responsible" [3].
Key elements of the declaration included:
| Element | Description |
|---|---|
| Shared risk recognition | Acknowledgment that frontier AI poses potentially catastrophic risks, including in areas such as cybersecurity, biotechnology, and disinformation |
| Need for cooperation | Commitment to international collaboration on understanding and mitigating AI risks |
| State-led safety testing | Agreement on the importance of government-led evaluation and testing of frontier AI systems |
| Developer transparency | Recognition that AI developers should be transparent about their safety practices and risk assessments |
| Research collaboration | Commitment to building shared scientific understanding through collaborative research |
The declaration was non-binding, meaning it carried no legal enforcement mechanism. Critics noted this limitation, arguing that voluntary commitments without enforcement mechanisms would be insufficient to keep pace with AI development [5].
One of the most concrete outcomes of the Bletchley Park Summit was the announcement of the UK AI Safety Institute (AISI), tasked with testing and evaluating frontier AI models. The institute received approximately 100 million GBP in public funding and quickly built one of the world's largest safety evaluation teams. It was the first government body specifically dedicated to evaluating the safety of advanced AI systems before and after deployment [6].
The summit also commissioned Yoshua Bengio, a Turing Award-winning deep learning researcher, to lead the production of a "State of the Science" report on the capabilities and risks of frontier AI, to be delivered ahead of the next summit [4].
Participants agreed on a schedule for future summits: the Republic of Korea would co-host a follow-up event within six months, and France would host the next full in-person summit approximately one year later. Several countries also announced bilateral AI safety agreements and research partnerships [4].
The AI Seoul Summit was held on May 21-22, 2024, co-hosted by the Republic of Korea and the United Kingdom. The event was structured as both a virtual leaders' session and an in-person ministerial forum, reflecting the compressed timeline between the Bletchley and Seoul events [7].
The most significant outcome of the Seoul Summit was the Frontier AI Safety Commitments, a set of voluntary pledges signed by 16 leading AI companies. Signatories included OpenAI, Anthropic, Google DeepMind, Meta, Microsoft, Amazon, IBM, Inflection AI, Mistral AI, Cohere, Samsung Electronics, NAVER, G42, the Technology Innovation Institute, xAI, and Zhipu AI [8].
The commitments focused on three main pillars:
| Pillar | Key Requirements |
|---|---|
| Risk assessment | Companies must evaluate risks throughout the AI lifecycle, define thresholds for severe risks, and implement mitigation strategies |
| Accountability | Companies must develop internal governance structures, assign clear roles for safety oversight, and allocate resources to uphold commitments |
| Transparency | Companies must provide public updates on their safety practices and involve external actors in risk assessment; signatories were required to publish safety frameworks before the Paris Summit in February 2025 |
The commitments required each signatory to publish a "safety framework focused on severe risks" before the next AI summit. This requirement later prompted several companies to release or update their safety policies, including Anthropic's Responsible Scaling Policy, OpenAI's Preparedness Framework, and Google DeepMind's Frontier Safety Framework [8].
At the leaders' session, participants adopted the Seoul Declaration for safe, innovative and inclusive AI, while ministers from twenty-seven nations and the European Union endorsed an accompanying ministerial statement committing to develop shared risk thresholds for frontier AI development and deployment, including agreement on identifying when model capabilities could pose "severe risks" without appropriate safeguards [7].
Ten countries and the European Union agreed to establish an international network of AI safety institutes with the goal of accelerating AI safety science. The network aimed to forge a common understanding of AI safety, align research efforts, and establish shared standards and testing methodologies. Participating countries included the UK, US, Japan, South Korea, Singapore, Canada, France, and others [7].
The UK AI Safety Institute also announced 8.5 million GBP in research funding for systemic AI safety work as part of the summit outcomes [9].
The AI Action Summit ("Sommet pour l'action sur l'intelligence artificielle") was held on February 10-11, 2025, at the Grand Palais in Paris, France. Hosted by President Emmanuel Macron, the event marked a significant shift in framing: while earlier summits centered primarily on safety risks, the Paris summit broadened its scope to include AI's potential for positive impact, economic development, and sustainability [10].
The Paris summit was substantially larger than its predecessors, attracting representatives from over 100 countries and involving more than 1,000 participants including heads of state, technology executives, researchers, and civil society leaders. Co-chairs included President Macron and Indian Prime Minister Narendra Modi [10].
The summit produced a joint statement titled "Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet." The statement called for AI policies that are "open, inclusive, transparent, ethical, safe, secure and trustworthy." It was signed by 61 nations and international organizations, including France, China, India, Germany, Indonesia, Italy, the UAE, the European Union, and the African Union [11].
However, two notable refusals drew significant attention: the United States and the United Kingdom both declined to sign.
US Vice President JD Vance, who attended the summit, argued that excessive regulation could "kill a transformative industry" and criticized European regulatory frameworks for imposing burdensome compliance costs. A spokesperson for UK Prime Minister Keir Starmer stated that the declaration "didn't provide enough practical clarity on global governance" and did not "sufficiently address harder questions around national security" [11][12].
The Paris summit produced several major initiatives:
| Initiative | Description |
|---|---|
| Current AI Foundation | New public-interest foundation launched with an initial $400 million investment, backed by the French government and partner organizations, to create AI "public goods" including datasets and open-source tools |
| InvestAI | European Commission President Ursula von der Leyen launched a 200 billion EUR initiative, including 20 billion EUR for four AI "gigafactories" to train large models |
| ROOST Coalition | Google, Discord, OpenAI, and Roblox launched the Robust Open Online Safety Tools initiative to develop free, open-source tools for detecting child sexual abuse material |
| Private investment | President Macron announced approximately 110 billion EUR in private investment pledges for France's AI sector, including 30-50 billion EUR from the UAE and 20 billion EUR from Brookfield Corporation |
The International AI Safety Report, commissioned at Bletchley Park and led by Yoshua Bengio, was published on January 29, 2025, just ahead of the summit. The report assessed the risks and threats posed by general-purpose AI and was discussed at the summit as part of its "Trust in AI" pillar [10].
The fourth summit in the series, the AI Impact Summit, was held at Bharat Mandapam in New Delhi, India, from February 16-21, 2026. Hosted by India, the event continued the trend of expanding participation, with approximately 600,000 in-person attendees and over 900,000 cumulative virtual viewers [13].
The India AI Impact Summit Declaration was endorsed by 92 countries and international organizations, the largest number of signatories in the summit series. Key outcomes included:
| Outcome | Details |
|---|---|
| New Delhi Frontier AI Impact Commitments | 13 leading global and Indian frontier model developers pledged to promote trustworthy and inclusive AI deployment |
| Global AI Impact Commons | A voluntary initiative featuring 80+ impact stories across 30+ countries for sharing and replicating successful AI use cases |
| Infrastructure expansion | India announced an additional 20,000 GPUs for its sovereign compute capacity, adding to 38,000+ GPUs already provisioned under the IndiaAI Mission |
| Investment commitments | Over $200 billion in AI-related investment commitments, with Reliance Industries pledging $110 billion over seven years |
| Guinness World Record | India achieved a record for the most pledges received for an AI responsibility campaign in 24 hours, with over 250,000 validated pledges |
The following table summarizes the key characteristics and outcomes of each summit in the series:
| Feature | Bletchley Park (Nov 2023) | Seoul (May 2024) | Paris (Feb 2025) | New Delhi (Feb 2026) |
|---|---|---|---|---|
| Host country | United Kingdom | South Korea / UK | France | India |
| Host leader | PM Rishi Sunak | President Yoon Suk Yeol (co-hosted with PM Rishi Sunak) | President Emmanuel Macron | PM Narendra Modi |
| Countries participating | 28 + EU | 27+ nations | 100+ countries | 92+ signatories |
| Primary declaration | Bletchley Declaration | Seoul Declaration | Statement on Inclusive and Sustainable AI | India AI Impact Summit Declaration |
| Key focus | AI safety risks | Frontier AI commitments | AI for public good and sustainability | AI impact and inclusive development |
| Corporate commitments | N/A | 16 companies signed Frontier AI Safety Commitments | Private investment pledges of ~110B EUR | 13 frontier model developer commitments |
| Institutional outcomes | UK AI Safety Institute announced | International network of AI safety institutes | Current AI Foundation ($400M); InvestAI (200B EUR) | Global AI Impact Commons; infrastructure expansion |
| US/China participation | Both signed declaration | Both participated | China signed; the US and UK declined | Broader participation |
| Binding commitments | None (voluntary) | Voluntary | Voluntary | Voluntary |
The AI Safety Summit series represents the first sustained multilateral effort to coordinate international responses to advanced AI risks. Its significance can be assessed across several dimensions.
Before the Bletchley Park Summit, there was no consensus international language for discussing AI risks. The Bletchley Declaration established shared terminology around "frontier AI," "severe risks," and the responsibilities of developers and governments. This common vocabulary has since been adopted in national policy documents, corporate safety frameworks, and academic research worldwide [3].
The summits catalyzed the creation of new institutions. The UK AI Safety Institute (later renamed the AI Security Institute) became the first government body dedicated to evaluating frontier AI systems. The international network of AI safety institutes, launched at Seoul, has expanded to include bodies in Japan, Singapore, Canada, and other nations. The US AI Safety Institute was created within NIST following President Biden's executive order on AI, though it was later reorganized and renamed the Center for AI Standards and Innovation (CAISI) under the Trump administration [6][14].
A notable trend across the summit series has been the gradual broadening of focus from pure AI safety concerns to wider questions about AI's economic and social impact. The Bletchley Park Summit was narrowly focused on existential and catastrophic risks from frontier AI. By the Paris summit, the agenda had expanded to encompass AI for public good, sustainability, digital divides, and market concentration. The New Delhi summit continued this trajectory with its emphasis on inclusive development and sovereign AI infrastructure [10][13].
Some observers have praised this broadening as a necessary recognition that AI governance must address more than worst-case scenarios. Others, including Anthropic CEO Dario Amodei, have criticized it as a dilution of focus, arguing that the Paris summit in particular was a "missed opportunity" for meaningful safety commitments [10].
The summit series has faced several criticisms. All declarations and commitments have been voluntary and non-binding, lacking enforcement mechanisms. The refusal of the US and UK to sign the Paris declaration raised questions about the durability of the consensus achieved at Bletchley Park. Critics have also noted that the summits have disproportionately featured perspectives from wealthy nations and large technology companies, with insufficient representation from the Global South, civil society, and communities most affected by AI deployment [11].
The gap between voluntary commitments and actual implementation remains a concern. While companies signed the Seoul Frontier AI Safety Commitments, monitoring compliance with these pledges has proven difficult, and no formal accountability mechanism exists [8].
Switzerland has announced it will host a follow-on summit in Geneva in 2027. The United Nations is also planning its first global forum on AI, scheduled for July 2026. These events suggest that the summit series will continue as a fixture of international AI governance, even as questions persist about its effectiveness in translating declarations into concrete regulatory outcomes [15].
As of early 2026, the AI Safety Summit series has established itself as the primary venue for international AI governance discussions. The series has grown from 28 country signatories at Bletchley Park to 92 at New Delhi, reflecting expanding global engagement with AI governance issues. However, the series faces headwinds. The US refusal to sign the Paris declaration, combined with the Trump administration's deregulatory stance on AI, has introduced uncertainty about American participation in future multilateral AI safety efforts. The UK's parallel refusal to sign in Paris further complicated the picture, given that the UK originated the summit series [12].
The institutional infrastructure created by the summits, particularly the network of AI safety institutes, continues to operate and expand. The International AI Safety Report published ahead of the Paris summit has become a reference document for policymakers worldwide. And the corporate safety frameworks prompted by the Seoul commitments have introduced a degree of standardization to how frontier AI companies communicate about risk [8].
Whether these voluntary, non-binding mechanisms can keep pace with the rapid advancement of AI capabilities remains the central question facing the summit process.