The Seoul Declaration is the short name for the Seoul Declaration for Safe, Innovative and Inclusive AI, a non-binding international statement adopted on 21 May 2024 by world leaders attending the virtual leaders' session of the AI Seoul Summit. It was the headline diplomatic outcome of the second event in the AI Safety Summit series, co-hosted by the United Kingdom and the Republic of Korea, and was accompanied on the same day by the Seoul Statement of Intent toward International Cooperation on AI Safety Science. On the second day, 22 May 2024, ministers from 27 countries and the European Union adopted a separate Seoul Ministerial Statement covering similar ground in greater detail. Alongside these government documents, 16 frontier AI developers signed a parallel set of voluntary corporate pledges known as the Frontier AI Safety Commitments.
The Seoul Declaration is widely treated as the political follow-up to the Bletchley Declaration of November 2023. Where Bletchley established a shared international vocabulary around frontier AI risk, Seoul tried to operationalise that vocabulary by linking it to corporate safety frameworks, a network of national AI safety institutes, and the International Scientific Report on the Safety of Advanced AI led by Yoshua Bengio, whose interim edition was published days before the summit. The declaration itself is short, but the package adopted at Seoul is often read as the moment when the post-Bletchley settlement began to harden into something resembling an international regime, however thin and voluntary.
The declaration was not signed by every country present at the summit. Only 10 countries plus the European Union endorsed the leaders-level Seoul Declaration on 21 May. The broader Ministerial Statement on 22 May added 17 more countries, bringing the total to 27 plus the EU. The discrepancy reflects the summit's split format: a small virtual leaders' session followed by a larger in-person meeting of digital and technology ministers. Notably, China was not a Seoul Declaration signatory, despite its presence at Bletchley Park.
The first AI safety summit was held at Bletchley Park in the United Kingdom on 1 and 2 November 2023, hosted by then UK Prime Minister Rishi Sunak. The closing communiqué, the Bletchley Declaration, was endorsed by 28 countries plus the European Union, including the United States, China, the United Kingdom, France, Germany, Japan, India, Brazil, Saudi Arabia, the United Arab Emirates and others. It affirmed that frontier AI should be "designed, developed, deployed, and used in a manner that is safe, human-centric, trustworthy and responsible" and accepted the principle that the most advanced models could pose catastrophic risks. The declaration also set the follow-up calendar: the Republic of Korea agreed to co-host an interim summit roughly six months later, with France hosting a full successor summit in early 2025.[1]
In the months between Bletchley and Seoul, the policy environment shifted in three ways that shaped what the Seoul package would contain.
First, the United Kingdom and the United States both stood up dedicated technical bodies. The UK AI Safety Institute, announced by Sunak at Bletchley, began running pre-deployment evaluations of frontier models in early 2024. The US AI Safety Institute, established within NIST under President Joe Biden's executive order on AI, formally launched in February 2024. The two institutes signed a memorandum of understanding with each other in April 2024 on joint testing of frontier models, and both were conducting evaluations by the time the Seoul Summit opened.[2]
Second, the leading frontier developers had begun to publish their own internal safety frameworks. Anthropic released the first version of its Responsible Scaling Policy in September 2023. OpenAI released its Preparedness Framework in December 2023. Google DeepMind was working on what would become the Frontier Safety Framework, published in mid-May 2024, days before the Seoul Summit opened. These documents shared a common idea: a developer pre-commits to capability evaluations and to stopping or constraining training and deployment when models cross defined thresholds for catastrophic risk.
Third, the European Union finalised the EU AI Act in May 2024, with provisions on general-purpose AI models including model evaluation, systemic-risk assessment and serious-incident reporting. Although the act would not enter into force until August 2024, with its obligations applying in stages from 2025, its presence shaped the language used at Seoul and gave European participants a domestic legal framework with which the rest of the package had to be compatible.
By May 2024, in other words, there was a recognisable shape to international frontier AI governance: voluntary developer pledges, government safety institutes, an evolving regulatory layer, and a multilateral declaratory layer. The Seoul Summit was the first attempt to weave these strands into a single political moment.
The AI Seoul Summit was held on 21 and 22 May 2024. It used a two-track format that the UK and South Korean co-hosts had agreed on in late 2023 to keep the event manageable while still maintaining heads-of-government attention.
| Day | Date | Format | Co-chairs | Document adopted |
|---|---|---|---|---|
| Day 1 | 21 May 2024 | Virtual leaders' session | UK PM Rishi Sunak and ROK President Yoon Suk Yeol | Seoul Declaration; Seoul Statement of Intent on AI Safety Science |
| Day 2 | 22 May 2024 | In-person ministerial meeting in Seoul | UK Technology Secretary Michelle Donelan and ROK Minister of Science and ICT Lee Jong-Ho | Seoul Ministerial Statement |
The leaders' session was held by video link, the compact format the co-hosts had agreed for this interim summit. President Yoon spoke from Seoul. Other leaders, including Sunak, Italian Prime Minister Giorgia Meloni, Canadian Prime Minister Justin Trudeau, German Chancellor Olaf Scholz, French President Emmanuel Macron, Japanese Prime Minister Fumio Kishida, Australian Prime Minister Anthony Albanese, Singaporean Prime Minister Lawrence Wong and US Vice President Kamala Harris (representing President Biden), joined remotely. European Commission President Ursula von der Leyen represented the EU.[3]
The ministerial day was held in person at a venue in Seoul. It included a broader set of countries and a separate "AI Global Forum" hosted by the South Korean government, which featured industry executives and civil society panels.
The leaders' session was paired with a closed industry roundtable that included senior figures from most of the major frontier developers, among them Sam Altman (OpenAI), Dario Amodei (Anthropic), Demis Hassabis (Google DeepMind), Brad Smith (Microsoft), Mark Zuckerberg (Meta) and Lee Jae-yong (Samsung). Elon Musk attended in his capacity as head of xAI. Naver, the Korean search giant, participated as a host-country company. The presence of these executives was politically important because it allowed the host governments to announce the corporate Frontier AI Safety Commitments alongside the inter-governmental declaration.[4]
The Seoul Declaration is short by international agreement standards, running to nine numbered paragraphs. The text was published in parallel by the UK Government, the Korean presidential office, the Australian Department of Industry, the Office of the Prime Minister of Canada and others, with identical wording.
The declaration opens by affirming the leaders' "common dedication to fostering international cooperation and dialogue on artificial intelligence (AI) in the face of its unprecedented advancements and the impact on our economies and societies." It then sets out three interlocking goals which become the spine of the rest of the document: AI safety, AI innovation and AI inclusivity. The leaders declare that these goals are "inter-related" and "need to be addressed together through international cooperation and dialogue."[5]
A central paragraph picks up the language Bletchley used about developer responsibility:
> We recognize the particular responsibility of organizations developing and deploying frontier AI, and, in this regard, note the Frontier AI Safety Commitments published on 21 May 2024.
This sentence is doing real work. It is the formal hinge between the inter-governmental track and the corporate pledges, and it is also the closest the Seoul Declaration comes to making any specific demand of any specific actor. A separate paragraph commits the signatory governments to "foster" the development and deployment of safe AI by, among other things, supporting the work of AI safety institutes and the production of an international scientific report on the safety of advanced AI.
The innovation and inclusivity portions of the text are vaguer. They reference the OECD AI Principles, the G7 Hiroshima Process International Code of Conduct for Advanced AI Systems, the Global Partnership on AI, the Council of Europe's Framework Convention on AI, and the UN's work on AI for sustainable development. The leaders commit to "active multi-stakeholder collaboration" with the private sector, academia and civil society.[5]
The declaration is not a treaty and contains no enforcement provisions. It does not set numerical thresholds, define "frontier AI" or specify what counts as a "severe risk". Those tasks are pushed forward to the corporate Frontier AI Safety Commitments and to the work of the AI safety institute network. Critics of the document have generally focused on this point: the declaration's principal job is to legitimise the rest of the Seoul package rather than to impose obligations of its own.
Alongside the declaration, the same 11 leaders issued the Seoul Statement of Intent toward International Cooperation on AI Safety Science. This shorter document is sometimes treated as an annex to the declaration, but it was signed separately and published as a distinct document by the UK and Korean governments.[6]
The statement's central commitment is to support the creation of, and cooperation among, national institutions devoted to the technical evaluation of frontier AI. It says:
> We commend the collective work to create or expand public and/or government-backed institutions, including AI Safety Institutes, that facilitate AI safety research, testing and/or developing guidance to advance AI safety for commercially and publicly available AI systems.
It then articulates the leaders' "shared ambition to develop an international network among key partners to accelerate the advancement of the science of AI safety." The countries that endorsed the statement are the same group that signed the leaders' declaration: Australia, Canada, the EU, France, Germany, Italy, Japan, the Republic of Korea, Singapore, the United Kingdom and the United States. This commitment was the seed of what became, later in 2024, the International Network of AI Safety Institutes, formally launched in San Francisco in November 2024 with 10 founding members.[7]
The statement also flags that signatories will share information about "models, including their capabilities, limitations, and risks as appropriate" and will work on the "monitoring of AI harms." These are weak obligations on paper, but they provide a political mandate for the institutes that did exist (the UK and US bodies in particular) to expand cooperation, and they acknowledge in print that AI safety is best treated as a scientific problem rather than only a regulatory one.
The Seoul Ministerial Statement was adopted on 22 May 2024 by digital and technology ministers from 27 countries and the European Union, a wider group than the leaders' session. The participants were Australia, Canada, Chile, France, Germany, India, Indonesia, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, Nigeria, New Zealand, the Philippines, the Republic of Korea, Rwanda, the Kingdom of Saudi Arabia, the Republic of Singapore, Spain, Switzerland, Türkiye, Ukraine, the United Arab Emirates, the United Kingdom, the United States of America and the European Union.[8]
The ministerial document is more concrete than the leaders' declaration. It explicitly recognises that some frontier AI capabilities, "unless adequately mitigated," could pose severe risks, including capability uplift for chemical, biological, radiological and nuclear weapons, evasion of human oversight through manipulation and deception, and autonomous replication and adaptation.
It also signs ministers up to develop, for the first time as a multilateral group, "shared risk thresholds" for frontier AI. This is significant because it commits a much wider set of governments, including major emerging economies such as India, Indonesia, the UAE and Saudi Arabia, to the underlying frontier-risk frame, even though most of those governments did not sign the leaders' declaration.
In parallel, the statement discusses innovation policy and inclusivity in ways the leaders' document did not. Ministers commit to nurturing access to compute, data and talent for small and medium-sized enterprises, start-ups and academic researchers, and to working on workforce and education impacts. They also support efforts to involve the Global South more substantively in international AI policy, with explicit references to Kenya, Nigeria, Rwanda and other African signatories.[8]
The most concrete and most discussed product of the Seoul Summit was a separate document, the Frontier AI Safety Commitments, signed on 21 May 2024 by 16 frontier AI developers. The text was negotiated by the UK and Korean governments with the Frontier Model Forum and individual companies, and is published on GOV.UK.[9]
The initial signatories span seven headquarters jurisdictions, reflecting a deliberate effort to reach beyond the United States to companies based in Canada, the United Kingdom, France, the United Arab Emirates, the Republic of Korea and China. The list is set out below.
| # | Company | Headquarters | Notes |
|---|---|---|---|
| 1 | Amazon | United States | Investor in Anthropic; develops the Titan and Nova model families |
| 2 | Anthropic | United States | Operator of the Claude model family; published the first Responsible Scaling Policy |
| 3 | Cohere | Canada | Toronto-headquartered enterprise LLM developer |
| 4 | Google / Google DeepMind | United States / United Kingdom | DeepMind is the London-based research arm; the commitment was signed at the Google level |
| 5 | G42 | United Arab Emirates | Abu Dhabi-based AI holding group; partner of Microsoft |
| 6 | IBM | United States | Long-standing AI research firm; developer of the Granite model family |
| 7 | Inflection AI | United States | Pi chatbot maker; most of its team had moved to Microsoft AI by the time of signing |
| 8 | Meta | United States | Developer of the Llama open-weights model series |
| 9 | Microsoft | United States | Major investor in OpenAI; developer of Phi and other models |
| 10 | Mistral AI | France | Paris-based open-weights specialist |
| 11 | Naver | Republic of Korea | Korean web portal and HyperCLOVA X model developer |
| 12 | OpenAI | United States | Developer of the GPT and o-series models |
| 13 | Samsung Electronics | Republic of Korea | Major chip and consumer electronics maker; Gauss model developer |
| 14 | Technology Innovation Institute | United Arab Emirates | Abu Dhabi research body; publisher of the Falcon open-weights series |
| 15 | xAI | United States | Elon Musk's AI company; developer of Grok |
| 16 | Zhipu AI | China | Beijing-based developer of the GLM and ChatGLM model families |
Zhipu AI is the only Chinese signatory. The Chinese government did not sign either the leaders' declaration or the ministerial statement, so Zhipu's participation is sometimes read as a private-sector workaround, although Zhipu has close ties to Tsinghua University and is treated by US authorities as a Chinese national champion.
Four additional companies later joined the same set of commitments after the Seoul Summit: 01.AI, Magic, MiniMax and NVIDIA, bringing the eventual total to 20. Some accounts of the summit therefore refer to "20 companies" rather than the original 16.[10]
The Frontier AI Safety Commitments document is organised around three numbered Outcomes (I, II, III), each broken down into specific commitments numbered with Roman numerals from i to viii. The structure deliberately mirrors the language used in existing developer policies such as Anthropic's Responsible Scaling Policy, OpenAI's Preparedness Framework and Google DeepMind's Frontier Safety Framework, but it is also generic enough that companies without published frameworks could sign it.[9]
The substantive commitments are summarised below.
| Outcome | # | Commitment |
|---|---|---|
| I. Risk identification and mitigation | i | Assess risks posed by frontier models or systems across the AI lifecycle, including before deployment and, as appropriate, before and during training. |
| | ii | Set out thresholds at which severe risks posed by a model or system, unless adequately mitigated, would be deemed intolerable. |
| | iii | Articulate how risk mitigations will be identified and implemented to keep risks within defined thresholds. |
| | iv | Set out explicit processes they intend to follow if their model or system poses risks that meet or exceed the pre-defined thresholds, including not developing or deploying a model or system at all if mitigations cannot keep risks below the thresholds. |
| | v | Continually invest in advancing their ability to implement commitments i to iv, including risk assessment, identification, evaluation, mitigation and monitoring. |
| II. Accountability | vi | Adhere to the commitments outlined in I to V, including by developing and continuously reviewing internal accountability and governance frameworks and assigning roles, responsibilities and sufficient resources to do so. |
| III. Transparency | vii | Provide public transparency on the implementation of the above commitments (I to VI), except insofar as doing so would increase risk or divulge sensitive commercial information disproportionate to the societal benefit. |
| | viii | Explain how, if at all, external actors, such as governments, civil society, academics and the public, are involved in the process of assessing the risks of their AI models and systems, the adequacy of their safety framework, and their adherence to that framework. |
The single most consequential clause in the document is commitment iv, the explicit reference to not developing or deploying a model if risks cannot be brought below pre-defined thresholds. This is the moment the Seoul Summit imported the central idea of the Responsible Scaling Policy into a multilateral text. It does not commit any particular developer to any particular threshold, but it asks every signatory to commit, in writing, to the basic principle that there are some capabilities they will not ship without adequate mitigations.
The operative deadline in the document is the request that signatories "publish a safety framework focused on severe risks" before the next AI summit, scheduled for Paris in February 2025. This framing turned the Seoul Commitments into a tracking exercise: in the nine months between Seoul and Paris, observers including AILabWatch, METR and the Frontier Model Forum tracked which signatories had published frameworks and which had not.
Most of the major developers either had a framework already or used the Seoul deadline as a forcing function to publish one.
| Developer | Framework | First published |
|---|---|---|
| Anthropic | Responsible Scaling Policy (v2.0 published October 2024) | September 2023 |
| OpenAI | Preparedness Framework (v2 published April 2025) | December 2023 |
| Google DeepMind | Frontier Safety Framework | May 2024 |
| Microsoft | Frontier Governance Framework | September 2024 |
| Meta | Frontier AI Framework | February 2025 |
| G42 | Frontier AI Safety Framework | February 2025 |
| Cohere | Secure AI Frontier Model Framework | February 2025 |
| Naver | AI Safety Framework | August 2024 |
| xAI | Risk Management Framework | February 2025 |
| Amazon | Frontier Model Safety Framework | February 2025 |
| NVIDIA (post-Seoul signatory) | Frontier AI Safety Framework | February 2025 |
| Magic (post-Seoul signatory) | Safety policy | February 2025 |
By the Paris AI Action Summit in February 2025, METR counted 12 signatories that had published a framework meeting the Seoul standard, with the rest either delayed or not yet committed publicly.[11]
The substance of these frameworks is broadly consistent. Each one names a small number of severe-risk categories (typically chemical, biological, radiological and nuclear weapons uplift; offensive cyber operations; large-scale persuasion; autonomous AI research and development), defines capability thresholds at which additional safeguards are required, prescribes evaluations against those thresholds, and specifies model-weight security and deployment mitigations to apply when thresholds are crossed. The Seoul Commitments did not write these details into the multilateral text, but they did make them part of the public record by linking signing to publication.
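The shared decision logic of these frameworks can be expressed, as a purely illustrative sketch rather than any signatory's actual policy, as a simple threshold gate: evaluate a model against each severe-risk category and block deployment whenever a capability threshold is met without an adequate mitigation. All names below (`RiskCategory`, `Evaluation`, `deployment_decision`, the example scores and thresholds) are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical severe-risk categories mirroring those commonly named in
# published frameworks: CBRN uplift, offensive cyber, persuasion, AI R&D.
class RiskCategory(Enum):
    CBRN = "cbrn_uplift"
    CYBER = "offensive_cyber"
    PERSUASION = "large_scale_persuasion"
    AUTONOMY = "autonomous_ai_rnd"

@dataclass
class Evaluation:
    category: RiskCategory
    capability_score: float  # result of a capability evaluation (0-1, illustrative)
    threshold: float         # pre-committed level at which risk becomes intolerable
    mitigated: bool          # whether safeguards bring the risk back below threshold

def deployment_decision(evals: list[Evaluation]) -> bool:
    """Commitment iv in miniature: do not deploy if any category's capability
    meets or exceeds its threshold without adequate mitigation."""
    for e in evals:
        if e.capability_score >= e.threshold and not e.mitigated:
            return False  # halt deployment pending further mitigation
    return True

# Example: a model crossing the hypothetical cyber threshold unmitigated is blocked.
results = [
    Evaluation(RiskCategory.CBRN, 0.20, 0.60, mitigated=False),
    Evaluation(RiskCategory.CYBER, 0.72, 0.60, mitigated=False),
]
assert deployment_decision(results) is False
```

The sketch makes visible what the Seoul Commitments leave to each signatory: the choice of categories, the placement of thresholds and the judgement of when a mitigation is adequate all remain internal to the developer.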
The leaders' declaration was signed by 10 countries plus the European Union. The ministerial statement was signed by an additional 17 countries, bringing the broader Seoul package to 27 countries plus the EU. The two lists are summarised below.
| Country / entity | Region | Seoul Declaration (Leaders, 21 May) | Ministerial Statement (22 May) |
|---|---|---|---|
| Australia | Asia-Pacific | Yes | Yes |
| Canada | Americas | Yes | Yes |
| Chile | Americas | No | Yes |
| European Union | Europe | Yes | Yes |
| France | Europe | Yes | Yes |
| Germany | Europe | Yes | Yes |
| India | South Asia | No | Yes |
| Indonesia | Asia-Pacific | No | Yes |
| Israel | Middle East | No | Yes |
| Italy | Europe | Yes | Yes |
| Japan | Asia-Pacific | Yes | Yes |
| Kenya | Africa | No | Yes |
| Mexico | Americas | No | Yes |
| Netherlands | Europe | No | Yes |
| New Zealand | Asia-Pacific | No | Yes |
| Nigeria | Africa | No | Yes |
| Philippines | Asia-Pacific | No | Yes |
| Republic of Korea | Asia-Pacific | Yes | Yes |
| Rwanda | Africa | No | Yes |
| Saudi Arabia | Middle East | No | Yes |
| Singapore | Asia-Pacific | Yes | Yes |
| Spain | Europe | No | Yes |
| Switzerland | Europe | No | Yes |
| Türkiye | Eurasia | No | Yes |
| Ukraine | Europe | No | Yes |
| United Arab Emirates | Middle East | No | Yes |
| United Kingdom | Europe | Yes | Yes |
| United States | Americas | Yes | Yes |
Three states that signed the Bletchley Declaration in 2023 did not appear at Seoul: most prominently China, but also Brazil and Ireland. Their absence from the Seoul package is one of the reasons commentators have argued that Seoul represents a narrower, more US- and EU-aligned consensus than Bletchley, even as it adds substantive content.
The Seoul Declaration is best read against its predecessor. The Bletchley Declaration was the first international document on frontier AI risk; the Seoul Declaration was the first attempt to attach concrete commitments to that frame.
| Dimension | Bletchley Declaration (Nov 2023) | Seoul Declaration (May 2024) |
|---|---|---|
| Host(s) | United Kingdom | United Kingdom and Republic of Korea |
| Setting | In-person, two-day summit at Bletchley Park | Virtual leaders' session (Day 1) plus in-person ministerial in Seoul (Day 2) |
| Government signatories | 28 countries plus the EU | 10 countries plus the EU (leaders); 27 countries plus the EU (ministers) |
| Includes China | Yes | No |
| Core conceptual contribution | Recognition of frontier AI risk as an international concern | Linkage of risk recognition to corporate safety frameworks and AI safety institutes |
| Corporate commitments | None directly attached | 16 companies sign Frontier AI Safety Commitments on the same day |
| Treatment of severe risks | General language about catastrophic risks | Specific reference to CBRN uplift, autonomous replication, manipulation; commitment to develop shared thresholds |
| Institutional output | Announcement of the UK AI Safety Institute; commission of the State of the Science report | Endorsement of Statement of Intent on AI Safety Science; precursor to the International Network of AI Safety Institutes |
| Treatment of innovation and inclusivity | Mostly absent | Promoted to a co-equal goal alongside safety |
| Successor summit | Seoul (May 2024) | Paris AI Action Summit (Feb 2025) |
The shift from Bletchley to Seoul is sometimes described as a shift from "declaratory" to "operational" diplomacy. It is also a shift in tone. Bletchley was almost entirely a safety document. Seoul deliberately broadened the agenda to include innovation and inclusivity, partly at the request of the South Korean co-host, partly to keep emerging economies on board, and partly because the policy weather had changed: by mid-2024, governments had grown more sensitive to the political risk of being seen as anti-innovation.
Reception of the Seoul Declaration and the broader Seoul package was mixed. The package was widely seen as a useful institutional consolidation of the Bletchley moment, but several specific criticisms recurred across academic, civil society and industry commentary.
More than 100 civil society organisations and trade unions signed an open letter ahead of the summit branding the event a "missed opportunity". The letter argued that the summit's invite list was too narrow, that workers and people directly affected by AI deployment were excluded, and that the agenda was over-weighted toward speculative future risks at the expense of documented present harms such as algorithmic discrimination, surveillance and labour displacement. Some signatories also argued that the voluntary nature of the corporate commitments made them effectively unaccountable.[12]
Mozilla, Access Now and Article 19 made similar points in subsequent commentary, noting that the absence of binding rules on transparency, evaluation and incident reporting meant that the Seoul package would be hard to assess in retrospect. The Centre for Emerging Technology and Security at the Alan Turing Institute argued that the summit had achieved more than its critics gave it credit for, but agreed that civil society participation had been "limited" and that the institutional outputs leaned heavily on a handful of large jurisdictions and companies.[13]
The Centre for Strategic and International Studies (CSIS) characterised Seoul as a "mini summit" whose principal achievement was operationalising Bletchley by scaling up the network of national AI safety institutes. CSIS noted the absence of China from the leaders' declaration and warned that without Chinese participation, the summit series risked sliding from a global to a Western-aligned forum.[14]
The Centre for AI Safety, the Future of Life Institute and other safety-focused organisations broadly welcomed the Frontier AI Safety Commitments as the first multilateral text to embed the idea of capability thresholds and pre-committed deployment policies. They also pressed for stronger language on third-party evaluation and on what should happen if a signatory missed the Paris deadline.
Writing in Nature and similar venues, several authors observed that the Seoul Declaration had pushed the field toward something like "safety case" reasoning, where developers articulate in advance the conditions under which they would not deploy a system. Critics pointed out that the same authors had been making this argument for some time and that the Seoul text mostly catches up rather than breaking new ground.
The 16 corporate signatories framed the Frontier AI Safety Commitments as a continuation of work they were already doing. Anthropic's Dario Amodei publicly praised the inclusion of explicit thresholds. OpenAI's Sam Altman emphasised the importance of international coordination. Mistral AI, an open-weights specialist, signed the document but used the moment to argue that open-weights releases should not be treated as inherently more dangerous than closed releases, a position that would resurface as a flashpoint at the Paris summit.
A recurring industry concern was that the document's transparency clause, particularly commitment vii, could be read either narrowly (publish a framework once) or broadly (publish ongoing evaluation results). The Seoul text deliberately leaves this open.
UK Technology Secretary Michelle Donelan called the Seoul outcomes "the beginning of Phase Two of our AI safety agenda". South Korean President Yoon framed the event as the moment Korea joined the front rank of AI governance, and announced that Korea would establish its own AI safety institute. The United States, represented by Vice President Harris, used the summit to reiterate the work of the new US AI Safety Institute under NIST and to emphasise links to President Biden's AI executive order. The European Commission positioned the Seoul package as compatible with the EU AI Act, which had been finalised earlier in May 2024.[15]
The most-discussed absence was China. Beijing had been a Bletchley signatory and had attended the leaders' session there, but it was not invited to the Seoul leaders' session and did not sign the ministerial statement. Chinese state media coverage was muted. The decision to omit China was a deliberate choice by the co-hosts, made on the rationale that Seoul would focus on "like-minded" democracies; commentators have since debated whether that was the right call. The presence of Zhipu AI as a corporate signatory partly mitigates the optics, but the underlying gap between the multilateral process and Chinese AI governance widened at Seoul rather than narrowing.
The Seoul Declaration has had three main downstream effects.
The Statement of Intent issued alongside the declaration was the political seed of the International Network of AI Safety Institutes, formally launched in San Francisco on 20 to 21 November 2024 by 10 founding members: Australia, Canada, the European Union, France, Japan, Kenya, the Republic of Korea, Singapore, the United Kingdom and the United States. The network runs joint testing exercises, including a coordinated evaluation of frontier models in October and November 2024, and acts as a venue for sharing methodologies and red-team findings between government bodies. The UK and US institutes are the largest members; the founding membership also included the Korean AI Safety Institute and counterpart bodies in Japan and Singapore.[16]
By routing the Frontier AI Safety Commitments through a multilateral signing, Seoul effectively standardised the structure of corporate safety policy. The publish-by-Paris deadline produced a wave of new or updated frameworks in late 2024 and early 2025: Anthropic's RSP v2.0, OpenAI's Preparedness Framework v2, Google DeepMind's updated Frontier Safety Framework, Microsoft's Frontier Governance Framework, Meta's Frontier AI Framework, and similar documents from G42, Cohere, NVIDIA, Naver and others. METR and AILabWatch began regular comparative analyses of these frameworks, treating Seoul as the baseline.[11]
The European Union's General-Purpose AI Code of Practice, finalised in August 2025 to support the EU AI Act's obligations on general-purpose AI providers, references the Frontier AI Safety Commitments and the Seoul package as compatible reference points. So does the US AI Safety Institute's voluntary memoranda of understanding with Anthropic and OpenAI, signed in August 2024.
The Seoul Declaration committed signatories to a successor summit in France within roughly nine months. The Paris AI Action Summit was held on 10 and 11 February 2025 at the Grand Palais. Where Seoul was a safety-led summit with innovation and inclusivity bolted on, Paris reversed the emphasis. Its joint statement, the Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet, was signed by more than 60 governments and international organisations, but the United States and the United Kingdom both declined to sign, citing insufficient attention to national security, safety and global governance.[17]
The Paris pivot was widely read as a partial walking-back of the Seoul agenda. The Frontier AI Safety Commitments survived as a tracking exercise but did not gain new substantive obligations at Paris, and the institutional centre of gravity moved toward investment announcements such as the EU's InvestAI initiative and France's Current AI Foundation.
The fourth summit in the series, the AI Impact Summit, was held in New Delhi in February 2026 and was endorsed by 92 governments. Its New Delhi Frontier AI Impact Commitments referenced the Seoul Commitments as a precedent but again broadened the language away from severe-risk thresholds toward inclusive deployment. The trajectory from Bletchley to Seoul to Paris to New Delhi is therefore a story of expanding participation and broadening agenda, with Seoul as the high-water mark of the safety-first framing.
Several questions raised at Seoul remain open as of 2026.
First, whether voluntary frontier safety frameworks can be made auditable. The Seoul Commitments require publication of a framework but say nothing about external verification. Researchers at the Centre for the Governance of AI, METR and the AI Security Institute have proposed standards for third-party evaluation; these have not been formally adopted.
Second, whether the China gap is a permanent feature of the summit series. Beijing's exclusion from Seoul and its limited engagement with Paris and New Delhi suggest the summits are unlikely to host the kind of US-China safety dialogue that some at Bletchley imagined.
Third, whether any of the substantive commitments will harden into binding rules. The EU has the most enforcement infrastructure, and the EU AI Act's general-purpose provisions inherit some of the Seoul language. The UK has signalled a more legislative approach since 2025. The US, under the second Trump administration, has rolled back the executive order that underpinned its AI Safety Institute and renamed the body the Center for AI Standards and Innovation. The durability of the Seoul settlement is therefore tied to questions of US domestic policy that have nothing to do with the declaration itself.