The Bletchley Declaration is an international political statement on the safety of frontier AI systems, signed on 1 November 2023 by 28 countries and the European Union at the AI Safety Summit hosted by the United Kingdom at Bletchley Park, Buckinghamshire. It was the first agreement of its kind among states with active interests in advanced artificial intelligence, and the first international document to recognise that the most capable general-purpose AI models pose risks of catastrophic, even existential, harm. The declaration is non-binding. It does not impose duties on signatory governments or on AI developers, and it does not create any enforcement body. Its function is diplomatic and rhetorical: to establish a shared vocabulary, to acknowledge a shared problem, and to commit governments to a continuing process of cooperation [1].
The document was conceived and drafted by the British government under Prime Minister Rishi Sunak, who saw it as the centrepiece of a broader UK pitch to lead global discussion of frontier AI safety. It was negotiated in the weeks before the summit between UK officials and counterparts from the United States, China, the European Commission, and the other invited governments. The text itself runs to roughly 1,300 words. It is unusual in modern multilateral diplomacy in that it secured the joint signature of the United States and China at a moment of severe tension between the two countries, and in that it placed the long-term, speculative risks of AI alongside more familiar concerns such as bias, privacy, and labour market disruption [2].
By mid-2023 the rate of progress in large language models had created a sense of urgency in many capitals. The release of GPT-4 in March 2023 and the rapid uptake of ChatGPT had pushed AI onto the agendas of cabinets and parliaments that had previously treated it as a specialist topic. In May 2023 the Center for AI Safety published a one-sentence statement signed by hundreds of researchers and executives, including Sam Altman of OpenAI, Demis Hassabis of Google DeepMind, and Dario Amodei of Anthropic, asserting that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war" [3]. Two months earlier the Future of Life Institute had circulated an open letter calling for a six-month pause on training of systems more capable than GPT-4. These statements brought the language of catastrophic risk into mainstream policy debate for the first time.
Governments had already begun to act. The EU AI Act had been under negotiation since April 2021 and, by mid-2023, was approaching a final inter-institutional deal that would bind providers of high-risk and general-purpose AI systems under European law. China had introduced binding rules on generative AI, in force from August 2023. In the United States, the White House had collected voluntary commitments from seven leading AI developers in July 2023 (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI), and was preparing the executive order on AI that President Joe Biden would sign on 30 October 2023, just two days before the summit opened. The G7 had agreed an International Code of Conduct for Advanced AI Systems through the Hiroshima Process on 30 October 2023.
Against this backdrop the British government announced on 7 June 2023 that it would host the world's first global summit on AI safety. The aim, as set out in the official invitation, was to focus on the most capable models and to forge an agreed international response to the risks they might create. The summit announcement built on the creation in April 2023 of a government taskforce on advanced AI, chaired by the entrepreneur and investor Ian Hogarth and given an initial allocation of 100 million pounds. Launched as the Foundation Model Taskforce and later renamed the Frontier AI Taskforce, it would go on to become the UK AI Safety Institute.
Bletchley Park was chosen for its symbolism. The country house in Milton Keynes, about 80 kilometres north-west of London, was the wartime home of the British signals intelligence operation that broke German Enigma traffic. Alan Turing did much of his work there. The British government wanted the summit to evoke that history of national leadership in computing and to plant the idea that another moment of decisive technical and political coordination was at hand. The site is now a museum, and the summit was held in the renovated mansion and adjoining buildings.
The drafting of the declaration involved several months of bilateral and small-group negotiation. The most difficult conversations were with China. Some commentators, including a number of British members of parliament, had argued that China should not be invited at all. The Sunak government took the opposite view, on the basis that no document on global AI risks could be credible without a Chinese signature. China was represented at the summit by Vice Minister of Science and Technology Wu Zhaohui. The text of the declaration is written in language that allows each signatory to interpret "safety" and "risk" in line with its own legal traditions, which was essential to securing a Chinese signature [4].
The European Union signed alongside its member states because the European Commission has competence over digital policy. Several large economies were not invited, including Russia. South Africa was invited but its representation was limited. Some smaller states with active AI policy work were also invited; Rwanda, Kenya, and Nigeria are signatories, while Saudi Arabia and the United Arab Emirates joined as Gulf participants.
The AI Safety Summit ran on 1 and 2 November 2023. About 150 delegates attended in person, representing governments, AI labs, academia, and civil society. The first day was dedicated to round-table discussions on five themes: risks to global safety from frontier AI misuse, risks from unpredictable advances in capabilities, risks from loss of control over advanced AI, integration of AI into society, and how the international community can best address risks. The second day, attended by a smaller subset of leaders and senior figures, was reserved for high-level discussions of next steps and concluded with the announcement of the UK AI Safety Institute [5].
US Vice President Kamala Harris led the American delegation. The European Commission was represented by President Ursula von der Leyen. UN Secretary-General Antonio Guterres attended. Italian Prime Minister Giorgia Meloni was the only G7 leader other than Sunak to attend in person. King Charles III delivered a video address. From the AI industry, attendees included Sam Altman of OpenAI, Demis Hassabis of Google DeepMind, Dario Amodei of Anthropic, Microsoft President Brad Smith, AWS CEO Adam Selipsky, Mustafa Suleyman of Inflection AI, Nick Clegg of Meta, and Elon Musk of xAI. On 2 November Sunak conducted a live conversation with Musk on X.
The UK government published several documents in the weeks immediately around the summit. The most consequential was a discussion paper titled "Frontier AI: capabilities and risks," which surveyed the state of the field and outlined the categories of harm the British government wanted to focus on, including misuse for chemical and biological weapons design, cyberattack at scale, large-scale disinformation, and the loss of meaningful human control over highly capable systems. The Chair's Summary released by the UK at the close of the summit set out the consensus reached on those five themes and announced that South Korea would co-host a follow-up event within six months, with France hosting a full in-person summit roughly a year later [5].
The declaration is short. It opens with a single-paragraph preamble affirming that AI "presents enormous global opportunities" with the potential to "transform and enhance human wellbeing, peace and prosperity," and stating that AI "should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible" [1].
The document notes that AI systems are already deployed across many domains of daily life and that this brings the need to address the protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy, and data protection. It then turns to its central concern, frontier AI, which it defines as "highly capable general-purpose AI models, including foundation models, that could perform a wide variety of tasks," together with relevant specific narrow AI that could exhibit capabilities that cause harm, "which match or exceed the capabilities present in today's most advanced models."
The key passage on catastrophic risk reads: "There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models." The signatories list cybersecurity and biotechnology as areas of particular concern, and they mention the amplification of risks such as disinformation. The text also recognises the need to address risks beyond frontier AI, including bias, privacy, and the impact on the labour market, but the document is structured around the frontier as its principal subject [1].
The operative section of the declaration is built around two commitments, sometimes referred to as the two pillars of the Bletchley process [6]:
1. To build a shared scientific and evidence-based understanding of the risks posed by frontier AI, and to maintain that understanding as capabilities continue to grow. The signatories agreed in particular to support "an internationally inclusive network of scientific research on frontier AI safety."
2. To build respective risk-based policies across signatory countries to ensure safety in light of those risks, while recognising that approaches will differ from one country to another and that international cooperation should respect those differences.
The declaration also affirms that "those developing frontier AI capabilities, in particular those AI systems which are unusually powerful and potentially harmful, have a particularly strong responsibility for ensuring the safety of these AI systems, including through systems for safety testing, through evaluations, and by other appropriate measures." This sentence is the closest the declaration comes to placing duties on private companies; it does so in the language of acknowledged responsibility rather than legal obligation.
The declaration ends with a commitment by signatories to sustain an inclusive global dialogue, including through existing international forums and other relevant initiatives, and to reconvene in 2024 to take stock of progress. The text closes with the formal endorsement of the listed countries and the European Union [1].
The most quoted passages of the declaration are:
- "AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible."
- "There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models."
- "We resolve to work together in an inclusive manner to ensure human-centric, trustworthy and responsible AI that is safe, and supports the good of all through existing international fora and other relevant initiatives, to promote cooperation to address the broad range of risks posed by AI."
Twenty-eight states plus the European Union signed the declaration on 1 November 2023. New Zealand subsequently joined on 23 October 2024, bringing the total to 29 states plus the EU [1]. The signatories span six continents and cover most of the world's leading AI developers and adopters.
| Signatory | Region | Notes |
|---|---|---|
| Australia | Oceania | Represented by Minister for Industry and Science Ed Husic |
| Brazil | South America | Launched its national AI strategy (EBIA) in 2021 |
| Canada | North America | Already had a Voluntary Code of Conduct on Generative AI |
| Chile | South America | Represented by Minister of Science |
| China | Asia | Represented by Vice Minister Wu Zhaohui; signed only the day-one declaration |
| European Union | Europe | Represented by Commission President Ursula von der Leyen |
| France | Europe | Slated to host the third summit (Paris, February 2025) |
| Germany | Europe | Represented by Federal Minister Volker Wissing |
| India | Asia | Represented by Minister of State Rajeev Chandrasekhar |
| Indonesia | Asia | One of three Southeast Asian signatories, alongside Singapore and the Philippines |
| Ireland | Europe | Hosts European HQs of OpenAI and Google |
| Israel | Middle East | One of three Middle Eastern signatories, alongside Saudi Arabia and the UAE |
| Italy | Europe | Prime Minister Giorgia Meloni attended in person |
| Japan | Asia | Lead state in the parallel G7 Hiroshima Process |
| Kenya | Africa | One of three African signatories |
| Kingdom of Saudi Arabia | Middle East | Significant new investor in AI infrastructure |
| Netherlands | Europe | Home to ASML, the lithography company central to AI chip supply |
| Nigeria | Africa | Africa's largest AI talent base |
| Philippines | Asia | Represented by Department of Trade and Industry |
| Republic of Korea | Asia | Co-host of the AI Seoul Summit (May 2024) |
| Rwanda | Africa | Active in regional AI strategy work |
| Singapore | Asia | Already operating its own AI evaluation framework, AI Verify |
| Spain | Europe | Held the EU presidency at the time of the summit |
| Switzerland | Europe | Home to ETH Zurich and EPFL AI research labs |
| Turkey | Eurasia | Represented as Republic of Türkiye in the official text |
| Ukraine | Europe | Signed despite the ongoing war with Russia |
| United Arab Emirates | Middle East | Hosts the Technology Innovation Institute, developer of Falcon |
| United Kingdom | Europe | Host nation; PM Rishi Sunak chaired the summit |
| United States | North America | Represented by Vice President Kamala Harris |
| New Zealand | Oceania | Joined later, on 23 October 2024 |
Notable absences include Russia, which was not invited; the African Union as a body; and most of Africa beyond Kenya, Nigeria, and Rwanda. Mexico, Argentina, and Vietnam, all states with significant AI activity, did not sign.
The Bletchley Declaration itself does not impose duties on private companies. The labs nevertheless used the summit as an opportunity to make a series of related voluntary commitments, both in side discussions and in published statements. The most concrete agreement was a commitment by major frontier developers to allow governments to evaluate the next generation of their models before public release, in cooperation with the newly announced UK AI Safety Institute and its later partner organisations.
| Lab | Bletchley-period commitment | Related framework |
|---|---|---|
| Anthropic | Allow pre-deployment access by the UK Frontier AI Taskforce, later UK AISI; CEO Dario Amodei presented Anthropic's Responsible Scaling Policy as a possible model for other developers | Responsible Scaling Policy, first published 19 September 2023 |
| OpenAI | Pre-deployment access to UK AISI; published its Preparedness Framework in December 2023, two months after the summit | Preparedness Framework |
| Google DeepMind | Pre-deployment access to UK AISI; published its Frontier Safety Framework in May 2024 ahead of the AI Seoul Summit | Frontier Safety Framework |
| Meta AI | Pre-deployment evaluation access for Llama-class models; later joined the Seoul Frontier AI Safety Commitments | Frontier AI Framework, published February 2025 |
| Microsoft | Pre-deployment evaluation access; reaffirmed commitments from the White House July 2023 voluntary commitments | Microsoft Responsible AI Standard |
| Amazon | Pre-deployment evaluation access; supported Bletchley process through AWS | AWS Responsible AI policies |
| Inflection AI | Mustafa Suleyman attended; Inflection signed onto pre-deployment testing | Most of its staff, including Suleyman, moved to Microsoft in March 2024 |
| xAI | Elon Musk attended; less detailed published framework | Released its own risk framework in 2025 |
The pre-deployment testing arrangement was the most operationally significant outcome of the summit for the AI industry. It established a precedent that government safety institutes could be given access to frontier models before public release, a degree of access that no government had previously enjoyed. The arrangement remained voluntary and bilateral, and the UK AISI built up the capacity to make use of it through 2024 [7].
Dario Amodei used his Bletchley remarks to argue that Anthropic's Responsible Scaling Policy was a prototype that other labs and regulators could adapt, and that the policy was "not a substitute for regulation" [8]. The framing helped to establish responsible scaling as a recognised category in AI governance, even though only Anthropic had a fully published policy at the time of the summit.
Reaction to the Bletchley Declaration split along familiar lines.
The UK government called the declaration "a landmark achievement that sees the world's greatest AI powers agree on the urgency behind understanding the risks of AI" [9]. Vice President Harris welcomed the declaration but used her own speech in London to argue for a broader concept of AI safety that included algorithmic discrimination and the impact of AI on work, framing the Biden executive order as a wider response. The European Commission stressed that the declaration was complementary to the EU AI Act, not a substitute for it. China's representative welcomed the document as evidence that international cooperation on AI was possible despite wider geopolitical tensions [4].
Reception in civil society was more mixed. The Ada Lovelace Institute, whose interim director Francine Bennett was one of a small number of civil society representatives at the summit, argued in published commentary that the declaration was a useful first step but that voluntary commitments and aspirational language would not be sufficient. Ada called for context-specific evaluation, statutory powers for sectoral regulators in the UK, and a move beyond reliance on lab self-governance [10].
A number of NGOs, including the Algorithmic Justice League, complained that civil society had been largely shut out of the closed-door discussions, that the agenda was tilted towards speculative existential risk and away from current harms such as bias and surveillance, and that the summit had given disproportionate prominence to a small number of large AI companies. The Open Markets Institute and other groups argued that the declaration paid insufficient attention to market concentration in the AI sector. Critics also noted that some of the founding signatories of the May 2023 "extinction risk" statement were the same executives whose models were the subject of the summit, raising concerns that the declaration was shaped by industry framing.
A frequent line of criticism was that the declaration was a poor substitute for binding regulation. The EU AI Act, then in trilogue negotiation, would impose enforceable obligations on providers and deployers of AI systems, with fines of up to 35 million euro or 7 per cent of global revenue for the most serious infractions. The declaration imposed no obligations and contemplated none. Defenders of the Bletchley approach replied that the EU AI Act could not bind non-European actors, that an inclusive process needed to be lighter on legal commitments to keep states like China and the United States at the table, and that the declaration's value lay in establishing a process rather than a rulebook [2].
Reuters, the BBC, the Financial Times, and The Guardian all framed the document as a diplomatic success for Sunak, while noting its non-binding character. The Guardian's analysis observed that the summit had succeeded in placing existential risks on the international agenda but had said little about more immediate harms. Reuters emphasised the rare instance of US-China cooperation on a technology policy issue. The Lancet Digital Health later published a peer-reviewed comment characterising Bletchley as the start of a process whose value would be determined by what followed at Seoul and Paris [11].
The declaration is now best understood as the founding document of the AI Safety Summit series, a continuing track of international meetings that has run at Seoul (May 2024), Paris (February 2025), and New Delhi (February 2026).
The most concrete institutional legacy is the network of national AI safety institutes. The UK announced its institute on 2 November 2023, the second day of the summit, when Sunak set out plans for what was then called the UK AI Safety Institute. The United States announced an equivalent body within the National Institute of Standards and Technology, the US AI Safety Institute, during the summit. Within months these were joined by safety institutes in Japan, Singapore, Canada, and elsewhere, and by 2024 a formal international network of AI safety institutes had been launched alongside the Seoul Summit. The UK institute was renamed the UK AI Security Institute in February 2025, while the US institute was reorganised into the Center for AI Standards and Innovation under the Trump administration in 2025 [12].
The summit also commissioned Yoshua Bengio to lead an International AI Safety Report, published in January 2025 ahead of the Paris summit. That report has become a reference document for policymakers and a recurring deliverable of the summit process.
| Summit | Date | Host | Key outcome | Relationship to Bletchley |
|---|---|---|---|---|
| Bletchley | 1-2 November 2023 | UK (Sunak) | Bletchley Declaration; UK AISI announced | Founding event |
| AI Seoul Summit | 21-22 May 2024 | South Korea / UK | Seoul Declaration; Frontier AI Safety Commitments by 16 companies | First follow-up; built on the two-pillar framework |
| AI Action Summit (Paris) | 10-11 February 2025 | France (Macron) | Statement on Inclusive and Sustainable AI; broadened agenda | Shifted focus from safety toward economic opportunity; US and UK declined to sign |
| AI Impact Summit (New Delhi) | 16-21 February 2026 | India (Modi) | India AI Impact Summit Declaration; 92 signatories; New Delhi Frontier AI Impact Commitments | Largest summit by participation; agenda focused on inclusive development |
The AI Seoul Summit on 21-22 May 2024 produced the Seoul Declaration, which built directly on the two pillars of Bletchley by introducing shared concepts of severe risk and intolerable risk. Its parallel outcome, the Frontier AI Safety Commitments, was signed by 16 leading AI companies, including all of the major Western frontier developers and several from China, Israel, the UAE, and South Korea. Each signatory committed to publish a safety framework focused on severe risks before the Paris summit, a deadline that prompted the publication of the Google DeepMind Frontier Safety Framework, the Anthropic Responsible Scaling Policy v2, the OpenAI Preparedness Framework v2, and the Meta Frontier AI Framework [13].
The AI Action Summit in Paris in February 2025 marked a shift in tone. Hosted by President Emmanuel Macron, it broadened the agenda from frontier risks to economic opportunity, public goods, and sustainability. Its concluding Statement on Inclusive and Sustainable AI was signed by 61 nations and international organisations but, in a striking break from Bletchley, both the United States and the United Kingdom refused to sign. The US delegation, led by Vice President JD Vance, argued that the declaration insufficiently protected American interests in maintaining technological leadership; the UK government said the text did not provide enough clarity on global governance or on national security risks. The Paris episode was the first sign that the consensus reached at Bletchley might not be durable.
Despite the strain visible by Paris, several elements of the Bletchley framework have endured. The international network of AI safety institutes is now active across at least eleven jurisdictions. Pre-deployment testing of frontier models by government bodies has become a recurring feature of major model releases. The vocabulary of frontier AI, severe risk, and responsible scaling, all foregrounded by the declaration, is now standard in policy texts ranging from the EU AI Act's General-Purpose AI Code of Practice to corporate governance frameworks. The cadence of summits set in motion at Bletchley continues, with Switzerland announced as the host of a 2027 follow-on event and the United Nations planning its first global forum on AI in July 2026 [12].
The declaration is best read, then, as a beginning. It put a small number of important ideas (frontier AI, catastrophic risk, lab responsibility, government safety testing) into the international diplomatic record at a moment when none of them was settled. Whether the process it launched can keep pace with the technology it was meant to address remains the open question of AI governance.