AI governance refers to the collection of frameworks, norms, standards, policies, and institutional arrangements that guide the development, deployment, and use of artificial intelligence systems. While AI regulation focuses specifically on binding laws and legal mandates, AI governance is a broader concept that encompasses voluntary standards, corporate policies, international agreements, technical practices, and multi-stakeholder coordination mechanisms. The goal of AI governance is to ensure that AI technologies are developed and used in ways that are safe, fair, transparent, and aligned with human values.
As AI capabilities have advanced rapidly, particularly with the rise of large language models and generative AI, governance efforts have intensified at every level: international organizations, national governments, industry bodies, and civil society groups have all worked to establish rules and norms for responsible AI. The field continues to evolve as policymakers grapple with the challenge of keeping pace with technological change while balancing innovation, safety, and rights protection.
AI governance and AI regulation are related but distinct concepts. Regulation refers to formal, legally binding rules enacted by governments or regulatory bodies, such as the EU AI Act. Governance, by contrast, is the broader ecosystem of mechanisms through which societies steer and manage AI development. This includes voluntary standards, corporate policies, international agreements, technical practices, and multi-stakeholder coordination mechanisms, alongside formal regulation itself.
This distinction matters because much of the current AI governance landscape consists of non-binding frameworks and voluntary commitments that operate alongside or ahead of formal regulation. In many jurisdictions, binding AI-specific laws remain limited, making soft governance instruments critical for shaping industry behavior.
Across virtually all major AI governance frameworks, several core principles recur. While the exact wording varies, these principles represent a broad international consensus on what responsible AI development requires.
| Principle | Description |
|---|---|
| Transparency | AI systems should be understandable and explainable. Organizations should disclose how AI systems work, what data they use, and how decisions are made. |
| Accountability | Clear lines of responsibility must exist for AI system outcomes. Developers and deployers should be answerable for harms caused by their systems. |
| Fairness and non-discrimination | AI systems should not produce biased or discriminatory outcomes. Developers must test for and mitigate bias across demographic groups. |
| Safety and robustness | AI systems should function reliably and securely, with safeguards against misuse, manipulation, and failure. |
| Human oversight | Meaningful human control should be maintained over AI systems, particularly in high-stakes decisions affecting people's rights and safety. |
| Privacy and data protection | AI systems must respect individuals' privacy rights and handle personal data in compliance with applicable data protection laws. |
| Environmental sustainability | AI development and deployment should account for environmental costs, including the energy consumption of training and running large models. |
| Inclusiveness | The benefits of AI should be broadly shared, and governance processes should include diverse voices, including those from developing nations and marginalized communities. |
AI governance has become a major focus of international diplomacy and multilateral cooperation. Several key international initiatives have shaped the global governance landscape.
The Organisation for Economic Co-operation and Development (OECD) adopted its Recommendation on Artificial Intelligence in May 2019, making it the first intergovernmental standard on AI. Originally endorsed by 42 countries (36 OECD members plus six partner nations), the OECD AI Principles were also adopted by G20 leaders at their summit in Osaka, Japan, in June 2019.
The principles are organized around five values-based principles for the responsible stewardship of trustworthy AI and five recommendations for national policies and international cooperation. They call for AI that is innovative and trustworthy, respects human rights and democratic values, operates transparently, functions in a robust and safe manner, and includes mechanisms for accountability.
In May 2024, the OECD updated the principles to address challenges posed by general-purpose and generative AI. The update added provisions on environmental sustainability, the risks of misinformation and disinformation amplified by AI, bias, and the protection of intellectual property rights. As of the 2024 update, the principles have been endorsed by 47 jurisdictions.
In October 2023, United Nations Secretary-General António Guterres established the High-level Advisory Body on Artificial Intelligence (HLAB-AI), a 39-member group comprising experts from government, industry, academia, and civil society. The body published an interim report in December 2023 and its final report, titled "Governing AI for Humanity," in September 2024.
The final report presented seven recommendations for global AI governance, including the creation of an independent international scientific panel on AI, a global policy dialogue on AI governance, and a global fund for AI.
These recommendations informed the Global Digital Compact, adopted by UN member states on 22 September 2024 during the Summit of the Future as an annex to the Pact for the Future. The Compact committed to establishing an Independent International Scientific Panel on AI and a Global Dialogue on AI Governance. In August 2025, the UN General Assembly approved these initiatives by consensus.
Launched at the G7 Summit in Hiroshima, Japan, in May 2023, the Hiroshima AI Process (HAP) focused specifically on governance of advanced and generative AI. Under Japan's G7 presidency, the process produced the Hiroshima AI Process Comprehensive Policy Framework, which was agreed at the G7 Digital and Tech Ministers' Meeting in December 2023 and endorsed by G7 leaders.
The framework includes guiding principles and a code of conduct for organizations developing advanced AI systems. It was the first successful international framework comprising both guiding principles and a code of conduct aimed at addressing the societal and economic impacts of advanced AI. Under Italy's G7 presidency in 2024, the process was further advanced, and a Reporting Framework was launched on 7 February 2025 to promote transparency and accountability in the development of advanced AI systems.
| Initiative | Year Established | Scope | Key Outputs |
|---|---|---|---|
| OECD AI Principles | 2019 (updated 2024) | 47 endorsing jurisdictions | First intergovernmental AI standard; values-based principles and policy recommendations |
| G20 AI Principles | 2019 | G20 member nations | Endorsed OECD principles at Osaka summit |
| G7 Hiroshima AI Process | 2023 | G7 nations | Comprehensive policy framework, code of conduct, reporting framework |
| UN High-level Advisory Body on AI | 2023 | Global (39 members) | "Governing AI for Humanity" report with seven recommendations |
| Global Digital Compact | 2024 | All UN member states | Commitments to International Scientific Panel, Global Dialogue, Global Fund on AI |
| ISO/IEC 42001 | 2023 | Global (voluntary standard) | First AI management system standard for organizational certification |
| International Network of AI Safety Institutes | 2024 | 10+ countries and EU | Coordination on model evaluation, synthetic content risks, safety research |
Different countries have adopted distinct approaches to AI governance, reflecting their varying priorities around innovation, safety, rights protection, and national security.
The United States has taken a shifting approach to AI governance. Under the Biden administration, Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence was signed on 30 October 2023. This order established a government-wide framework for responsible AI, requiring developers of the most powerful AI systems to share safety test results with the federal government, directing agencies to set standards for AI safety and security, and addressing issues of equity, civil rights, consumer protection, privacy, and innovation.
The National Institute of Standards and Technology (NIST) had previously released the AI Risk Management Framework (AI RMF) in January 2023, a voluntary framework to help organizations identify, assess, and manage AI-related risks. In July 2024, NIST published the Generative AI Profile (NIST AI 600-1), extending the framework to address risks specific to generative AI models.
On 20 January 2025, President Donald Trump revoked Executive Order 14110 on his first day in office. Three days later, he signed Executive Order 14179, titled "Removing Barriers to American Leadership in Artificial Intelligence," which directed agencies to review and potentially rescind policies from the previous administration that conflicted with the new approach of promoting AI development free from what the administration described as unnecessary regulatory burdens. In June 2025, the US AI Safety Institute at NIST was renamed the Center for AI Standards and Innovation (CAISI), signaling a shift in emphasis from safety toward standards and innovation, though the center's core functions of evaluating models and developing voluntary standards continued.
The United States has not enacted comprehensive federal AI legislation as of early 2026. Congress has introduced numerous AI-related bills but has not passed a broad AI law comparable to the EU AI Act. Individual states, notably Colorado and California, have pursued their own AI legislation.
The EU AI Act, formally adopted in 2024, is the world's first comprehensive, legally binding framework for regulating AI. The regulation takes a risk-based approach, categorizing AI systems into four tiers with corresponding obligations.
| Risk Level | Description | Examples | Requirements |
|---|---|---|---|
| Unacceptable (Prohibited) | AI applications incompatible with EU fundamental rights and values | Social scoring by governments, real-time remote biometric identification in public spaces (with limited exceptions), subliminal manipulation, exploitation of vulnerabilities | Banned entirely |
| High Risk | AI systems posing significant risks to health, safety, or fundamental rights | AI in critical infrastructure, education, employment, law enforcement, migration, justice | Conformity assessments, risk management systems, data governance, human oversight, technical documentation |
| Limited Risk | AI systems with risk of deceiving users | Chatbots, deepfake generators, emotion recognition systems | Transparency obligations (users must be informed they are interacting with AI) |
| Minimal Risk | All other AI systems | Spam filters, AI-enabled video games | No mandatory obligations (voluntary codes of practice encouraged) |
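The risk-based structure can be illustrated with a simple triage sketch. The example below is a hypothetical helper for an internal AI inventory; the category keywords are simplified assumptions and do not reproduce the Act's legal definitions, which depend on the Article 5 prohibitions, the Annex III use-case list, and sectoral product legislation.

```python
# Illustrative sketch only: a rough triage of AI use cases against the
# EU AI Act's four risk tiers. The keyword sets below are simplified
# assumptions for internal inventory screening, not legal definitions.

PROHIBITED_USES = {"social_scoring", "realtime_public_biometric_id", "subliminal_manipulation"}
HIGH_RISK_DOMAINS = {"critical_infrastructure", "education", "employment",
                     "law_enforcement", "migration", "justice"}
TRANSPARENCY_USES = {"chatbot", "deepfake_generation", "emotion_recognition"}

def triage_risk_tier(use_case: str, domain: str) -> str:
    """Return a coarse risk-tier label for first-pass inventory triage."""
    if use_case in PROHIBITED_USES:
        return "unacceptable (prohibited)"
    if domain in HIGH_RISK_DOMAINS:
        return "high risk"
    if use_case in TRANSPARENCY_USES:
        return "limited risk (transparency obligations)"
    return "minimal risk"

print(triage_risk_tier("chatbot", "customer_service"))   # limited risk (transparency obligations)
print(triage_risk_tier("cv_screening", "employment"))    # high risk
```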
The Act entered into force on 1 August 2024, with a phased implementation timeline. Prohibitions on unacceptable-risk AI and AI literacy requirements took effect on 2 February 2025. Obligations for general-purpose AI (GPAI) models, including large language models, became applicable on 2 August 2025. Most high-risk AI requirements apply from 2 August 2026, with obligations for high-risk AI embedded in already-regulated products (such as medical devices) extending to 2 August 2027.
Penalties under the Act are substantial: up to 35 million euros or 7% of global annual turnover for prohibited practices, up to 15 million euros or 3% for violations of high-risk AI obligations, and up to 7.5 million euros or 1.5% for supplying incorrect information to authorities.
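As a worked example of these penalty ceilings, the sketch below assumes the applicable maximum is the higher of the fixed amount and the turnover percentage, which is how the Act frames fines for companies.

```python
# Illustrative calculation of maximum EU AI Act fine exposure.
# Assumes the ceiling is the higher of the fixed amount and the percentage
# of worldwide annual turnover, as the Act provides for companies.

def max_fine_eur(violation: str, global_turnover_eur: float) -> float:
    ceilings = {
        "prohibited_practice": (35_000_000, 0.07),
        "high_risk_obligation": (15_000_000, 0.03),
        "incorrect_information": (7_500_000, 0.015),
    }
    fixed, pct = ceilings[violation]
    return max(fixed, pct * global_turnover_eur)

# A company with EUR 2 billion turnover faces up to EUR 140 million
# for a prohibited practice, since 7% of turnover exceeds EUR 35 million.
print(f"{max_fine_eur('prohibited_practice', 2_000_000_000):,.0f}")
```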
The United Kingdom has pursued what it calls a "pro-innovation" approach to AI governance. Rather than enacting a comprehensive AI law, the UK government published a white paper in March 2023 outlining a principles-based, non-statutory, and cross-sector framework. Existing regulators (such as the Financial Conduct Authority, the Medicines and Healthcare products Regulatory Agency, and the Information Commissioner's Office) are expected to apply five cross-cutting principles to AI within their respective domains: safety, security, and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress.
In January 2025, the UK Labour government launched an AI Opportunities Action Plan focused on leveraging AI for economic growth, including proposals for AI growth zones, new infrastructure, and a National Data Library. The government also announced plans to introduce legislation making voluntary agreements with AI developers legally binding.
In October 2025, the Department for Science, Innovation and Technology (DSIT) proposed a UK AI Growth Lab, involving cross-economy sandboxes for safely testing AI innovations under targeted regulatory modifications.
China has adopted a distinctive, incremental approach to AI governance, issuing a series of targeted regulations addressing specific AI applications rather than a single comprehensive law.
| Regulation | Effective Date | Scope |
|---|---|---|
| Administrative Provisions on Algorithm Recommendation for Internet Information Services | March 2022 | Regulates algorithmic recommendation in news feeds, search, short video, and other online services |
| Administrative Provisions on Deep Synthesis of Internet-based Information Services | January 2023 | Governs AI-generated or altered video, voice, text, and image content (deepfakes); requires labeling and traceability |
| Interim Measures for Administration of Generative AI Services | August 2023 | First binding national rules for generative AI; requires security assessments and model filing with the Cyberspace Administration of China |
| Interim Measures for Science and Technology Ethics Review | December 2023 | Requires ethical review of AI research and development activities |
| Measures for the Labelling of AI-Generated and Synthetic Content | September 2025 | Mandates implicit and explicit labeling of all AI-generated content across text, images, audio, video, and virtual scenes |
China's regulations emphasize content control and alignment with state values, requiring generative AI services to align with "core socialist values" and avoid content that harms state security or national unity. At the same time, China has been an active participant in international AI governance discussions, including signing the Bletchley Declaration in November 2023.
| Country/Region | Approach | Key Instrument(s) | Distinguishing Features |
|---|---|---|---|
| United States | Executive orders, voluntary frameworks, sector-specific | EO 14110 (revoked), EO 14179, NIST AI RMF, CAISI | Shifted from safety-focused to innovation-focused under Trump administration; no comprehensive federal law |
| European Union | Comprehensive legislation | EU AI Act | Risk-based tiers; first comprehensive AI law globally; significant fines for non-compliance |
| United Kingdom | Pro-innovation, principles-based | White paper, AI Opportunities Action Plan | Relies on existing sectoral regulators; non-statutory principles; AI growth focus |
| China | Incremental, application-specific regulations | Algorithm, deep synthesis, generative AI, and labeling regulations | Content control emphasis; early mover on binding generative AI rules; state-value alignment requirements |
| Japan | Light-touch, innovation-friendly | AI guidelines, Hiroshima AI Process leadership | Hosted G7 Hiroshima Process; collaborative approach with industry |
| Canada | Legislation in progress | Artificial Intelligence and Data Act (AIDA, proposed) | Proposed as part of broader digital charter legislation |
| Brazil | Legislation in progress | AI regulatory framework bill | Advancing comprehensive AI legislation through congress |
| Singapore | Voluntary governance framework | Model AI Governance Framework, AI Verify toolkit | Practical, testable governance tools; voluntary but influential in Asia |
AI Safety Institutes (AISIs) are state-backed, specialized organizations focused on evaluating AI systems, conducting safety research, and facilitating information exchange among governments, industry, and academia. The AISI model emerged as one of the most concrete institutional innovations in AI governance during 2023 and 2024.
The concept originated with the United Kingdom, which established the Foundation Model Taskforce in April 2023 (renamed the Frontier AI Taskforce in September 2023 and then the AI Safety Institute in November 2023 following the Bletchley Park summit). The United States announced its own AI Safety Institute in November 2023, housed within NIST. Japan followed in early 2024.
A second wave of institutes has since emerged, with countries including Canada, France, Germany, Singapore, India, South Korea, Kenya, Brazil, and Israel establishing or announcing their own AISIs.
Both the UK and US institutes underwent significant rebranding in 2025. On 14 February 2025, UK Technology Secretary Peter Kyle announced at the Munich Security Conference that the UK AI Safety Institute would become the AI Security Institute. The renamed body focuses on AI risks with security implications, such as chemical and biological weapon development, cyberattacks, and serious crimes, while explicitly excluding bias and freedom-of-speech concerns from its remit.
In June 2025, the US AI Safety Institute was renamed the Center for AI Standards and Innovation (CAISI) by Commerce Secretary Howard Lutnick. While the core functions of model evaluation and voluntary standard development continued, the rebrand signaled the Trump administration's preference for framing AI policy around innovation and standards rather than safety.
At the Seoul AI Summit in May 2024, ten countries and the EU pledged to establish AI safety institutes and an international coordination network. The inaugural convening of the International Network of AI Safety Institutes took place in San Francisco in November 2024, co-hosted by the US Department of Commerce and Department of State. The network focuses on three areas: managing synthetic content risks, testing foundation models, and conducting risk assessments for advanced AI systems.
A series of high-profile international summits on AI safety and governance have served as key venues for building political consensus and launching concrete initiatives.
The first AI Safety Summit, convened by the United Kingdom at Bletchley Park on 1-2 November 2023, was a landmark event in AI governance. Twenty-eight countries and the European Union, including the United States and China, signed the Bletchley Declaration, which identified frontier AI safety risks as a matter of shared international concern, particularly in domains such as cybersecurity and biotechnology, and called for international cooperation to address them.
The summit also resulted in a policy paper on AI safety testing, signed by ten countries and leading technology companies, and it catalyzed the creation of the first AI Safety Institutes.
The second summit, co-hosted by South Korea and the United Kingdom on 21-22 May 2024, advanced the Bletchley agenda with more concrete commitments. The Frontier AI Safety Commitments were signed by 16 AI companies, including Anthropic, Google, Meta, Microsoft, OpenAI, Amazon, Mistral AI, Samsung, xAI, Naver, Zhipu.ai, Cohere, IBM, and NVIDIA. Companies committed to not deploying AI systems if risks cannot be sufficiently mitigated, implementing red-teaming and safety evaluation practices, setting unacceptable risk thresholds, and publicly reporting model capabilities and limitations.
The Seoul Declaration was also adopted, and countries pledged to establish the International Network of AI Safety Institutes.
The AI Action Summit, held at the Grand Palais in Paris on 10-11 February 2025 and co-chaired by French President Emmanuel Macron and Indian Prime Minister Narendra Modi, marked a shift in tone from the previous summits. The summit's framing emphasized AI's benefits and opportunities, with some observers noting a reduced focus on safety compared to Bletchley Park and Seoul.
Fifty-eight countries signed the Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet, which outlined principles including accessibility, transparency, open development, positive labor market outcomes, and environmental sustainability. The United States and United Kingdom declined to sign the statement.
Key outputs included the publication of the first International AI Safety Report (released on 29 January 2025), led by Turing Award-winning computer scientist Yoshua Bengio and drawing on contributions from 96 experts across 30 countries. The summit also launched Current AI with a $400 million investment in public interest AI, and formed an environmental sustainability coalition with 91 partners.
Anthropic CEO Dario Amodei publicly described the Paris summit as a "missed opportunity" on safety.
Beyond government action, AI companies have developed internal governance structures and practices for responsible AI development and deployment.
Major technology companies have established dedicated responsible AI teams and review processes. These teams typically conduct internal audits, develop guidelines for model development, evaluate systems for bias and safety risks, and advise product teams on ethical deployment. Companies such as Google, Microsoft, Meta, and Anthropic maintain responsible AI teams, though the specific structures and authority of these teams vary significantly across organizations.
Model cards, introduced in a 2019 research paper by Margaret Mitchell, Timnit Gebru, and colleagues, are standardized documents that describe a machine learning model's intended use, performance characteristics, training data, known limitations, and potential biases. They provide a concise snapshot of each model's strengths, failure modes, licensing terms, and governance contacts.
System cards extend this concept to cover entire AI systems rather than individual models alone. A system card documents the full AI system including its intended use, data considerations, component interactions, and operational limitations. System cards are particularly valuable for complex deployments where multiple models and components interact.
Both model cards and system cards serve as governance tools by creating transparent documentation that enables external auditing, regulatory compliance, and informed decision-making by downstream users.
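A model card can be represented as a simple machine-readable record. The sketch below uses a hypothetical field set loosely based on the elements described above; it is not the schema from the original model cards paper or any particular company's template.

```python
# A minimal sketch of a machine-readable model card. Field names loosely
# follow the categories described above (intended use, training data,
# limitations, biases); the structure is illustrative only.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    model_name: str
    version: str
    intended_use: str
    out_of_scope_uses: list = field(default_factory=list)
    training_data: str = ""
    evaluation_results: dict = field(default_factory=dict)
    known_limitations: list = field(default_factory=list)
    bias_considerations: list = field(default_factory=list)
    license: str = ""
    governance_contact: str = ""

card = ModelCard(
    model_name="example-classifier",
    version="1.2.0",
    intended_use="Routing customer-support tickets by topic.",
    out_of_scope_uses=["Employment or credit decisions"],
    training_data="Anonymized support tickets, 2021-2023.",
    evaluation_results={"accuracy": 0.91, "worst_group_accuracy": 0.84},
    known_limitations=["Degrades on non-English tickets"],
    bias_considerations=["Performance gap across languages"],
    license="internal-use-only",
    governance_contact="ml-governance@example.com",
)
print(json.dumps(asdict(card), indent=2))
```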
Several frontier AI laboratories have developed frameworks that tie safety measures to the assessed capability level of their models.
Anthropic introduced its Responsible Scaling Policy (RSP), which defines AI Safety Levels (ASLs) that scale safeguards with model capability. The policy sets clear thresholds at which tighter controls become required and emphasizes pre-deployment testing, security hardening, and operational oversight. Anthropic's framework includes a Responsible Scaling Officer, anonymous internal whistleblowing channels, and public capability reports. The RSP has been updated multiple times, with Version 3.0 released in February 2026.
OpenAI published its Preparedness Framework, which assesses models across risk categories including biological and chemical threats, cybersecurity, and autonomous AI capabilities. OpenAI explicitly pledges to halt training for models assessed at "critical risk" levels.
Google DeepMind established its Frontier Safety Framework, which similarly focuses on monitoring for dangerous capabilities in the areas of biosecurity, cybersecurity, and AI self-improvement.
An independent analysis of these frameworks found that while all three commit to testing for dangerous capabilities and gating deployment behind safeguards, they remain vague on key details, particularly regarding mitigation strategies for highly autonomous or self-improving AI systems.
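The common pattern across these frameworks, comparing evaluation results against pre-committed capability thresholds and gating deployment on the corresponding safeguards, can be sketched as follows. The threshold values, risk categories, and safeguard names are hypothetical and do not correspond to any laboratory's actual criteria.

```python
# A simplified sketch of the capability-gating pattern: evaluation results
# are compared against pre-committed thresholds, and deployment is blocked
# until the safeguards required at that level are in place. All thresholds,
# categories, and safeguard names here are hypothetical.

EVAL_THRESHOLDS = {          # score above which the heightened tier applies
    "bio_uplift": 0.6,
    "cyber_offense": 0.5,
    "autonomous_replication": 0.4,
}
REQUIRED_SAFEGUARDS = {"enhanced_security", "expanded_red_teaming", "deployment_restrictions"}

def deployment_decision(eval_scores: dict, safeguards_in_place: set) -> str:
    triggered = [c for c, score in eval_scores.items()
                 if score >= EVAL_THRESHOLDS.get(c, 1.0)]
    if not triggered:
        return "deploy under standard safeguards"
    missing = REQUIRED_SAFEGUARDS - safeguards_in_place
    if missing:
        return f"hold deployment; triggered {triggered}, missing {sorted(missing)}"
    return f"deploy under heightened safeguards (triggered: {triggered})"

print(deployment_decision({"bio_uplift": 0.7, "cyber_offense": 0.2},
                          {"enhanced_security"}))
```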
On 21 July 2023, seven leading AI companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) announced voluntary commitments at the White House to develop AI in a safe and trustworthy manner. Additional companies joined in September 2023 (Adobe, Cohere, IBM, NVIDIA, Palantir, Salesforce, Scale AI, and Stability AI), and Apple joined in July 2024.
The commitments covered areas including pre-release safety testing (red-teaming), information sharing about risks and vulnerabilities, investment in cybersecurity, research on societal risks, development of technical mechanisms for identifying AI-generated content (such as watermarking), and public reporting on model capabilities and limitations.
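One of the more technical commitment areas, identifying AI-generated content, can be illustrated with a minimal provenance record. The format below is invented for illustration; it is not the C2PA specification or any signatory's actual watermarking scheme.

```python
# Illustrative sketch: attaching provenance metadata to AI-generated content
# so it can later be identified. The record format is invented for this
# example and is not any real standard or company implementation.
import hashlib
import json
import datetime

def provenance_record(content: bytes, model_id: str) -> dict:
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": model_id,
        "generated_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "ai_generated": True,
    }

text = b"Example paragraph produced by a generative model."
print(json.dumps(provenance_record(text, "example-model-v1"), indent=2))
```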
An assessment of company compliance with the commitments, published in 2025, found uneven adherence. The top-scoring company (OpenAI) achieved 83% compliance on the evaluation rubric, while the bottom-scoring company (Apple) reached only 13%. The commitments brought improvements in red-teaming practices and watermarking but did not deliver meaningful transparency or accountability.
The Frontier Model Forum is a nonprofit organization founded in 2023 by Anthropic, Google, Microsoft, and OpenAI, with Amazon and Meta joining subsequently. The Forum aims to advance AI safety research, identify best practices for the responsible development of frontier models, collaborate with governments and civil society on AI governance, and support efforts to address society's greatest challenges using AI.
The Forum announced over $10 million for an AI Safety Fund to support independent safety research. It operates as an industry-led body complementing governmental and international governance efforts.
The governance of open-source AI models has become one of the most contentious issues in AI policy. Open foundation models, where model weights are publicly released, present both significant benefits and distinct governance challenges.
Proponents argue that open-source AI accelerates innovation, reduces market concentration, increases transparency (since anyone can inspect model behavior), and helps level the global playing field by giving smaller firms, academic researchers, and governments outside major AI hubs access to advanced capabilities. Open-source AI can also serve as a check on concentrated power in the AI industry.
Critics counter that releasing model weights publicly makes it difficult or impossible to prevent misuse, such as generating harmful content, enabling novel cyberattacks, or assisting in the development of biological or chemical weapons. Unlike proprietary AI systems, open-source models can be modified and repurposed freely, making misuse harder to track or mitigate after release.
The EU AI Act addresses open-source AI with certain exemptions: open-source GPAI models are largely exempt from the Act's obligations for general-purpose AI, provided they do not pose systemic risk. However, this exemption does not apply to models classified as posing systemic risk (generally those trained with more than 10^25 FLOPs of computation).
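For rough orientation on the compute threshold, the sketch below uses the common rule of thumb that training compute is approximately 6 × parameters × training tokens. That heuristic is an assumption of this example, not part of the Act, which simply presumes systemic risk above 10^25 floating-point operations.

```python
# Back-of-envelope check against the EU AI Act's 10^25 FLOP systemic-risk
# presumption. Uses the common community heuristic of roughly
# 6 * parameters * training tokens; the heuristic is an assumption here,
# not part of the Act itself.

SYSTEMIC_RISK_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    return 6.0 * n_params * n_tokens

flops = estimated_training_flops(n_params=70e9, n_tokens=15e12)  # 70B params, 15T tokens
print(f"{flops:.2e} FLOPs -> systemic risk presumed: {flops > SYSTEMIC_RISK_FLOPS}")
# ~6.30e+24 FLOPs -> False (below the 1e25 threshold)
```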
Policy discussions in the United States shifted in 2024 toward strategic enablement rather than restriction, driven by recognition that open-source AI contributes to US technological competitiveness. The NTIA (National Telecommunications and Information Administration) published a report in mid-2024 recommending against blanket restrictions on open-weight models while calling for monitoring of risks.
Effective AI governance increasingly depends on multi-stakeholder processes that bring together governments, industry, academia, civil society, and technical communities. No single actor possesses the expertise, authority, or legitimacy to govern AI alone.
Governments bring regulatory authority and democratic legitimacy but often lack the technical expertise to keep pace with AI development. Industry has deep technical knowledge and control over AI development but faces conflicts of interest. Academia and civil society provide independent analysis and represent public interests but lack enforcement power.
Multi-stakeholder models have been central to several major governance initiatives. The OECD AI Principles were developed through extensive consultation with industry, civil society, and technical experts. The UN High-level Advisory Body on AI was deliberately composed of members from government, industry, academia, and civil society across diverse regions. The Global Partnership on AI (GPAI), launched in 2020, explicitly structures its work around multi-stakeholder working groups.
At the national level, multi-stakeholder approaches take the form of public consultations (as in the UK's regulatory approach), advisory committees (such as NIST's AI Safety Institute Consortium in the US, which included over 200 organizations), and standards-development processes (such as those at ISO and IEEE).
Published in December 2023, ISO/IEC 42001 is the world's first AI management system standard. It specifies requirements for establishing, implementing, maintaining, and improving an Artificial Intelligence Management System (AIMS) within organizations of all sizes and across industries. The standard provides a structured way to manage risks and opportunities associated with AI, addressing challenges including ethical considerations, transparency, bias, and continuous learning. Organizations can seek third-party certification to demonstrate compliance with the standard.
The NIST AI RMF, released in January 2023, provides a voluntary framework for managing AI risks throughout the AI system lifecycle. It is organized around four core functions: Govern (establishing organizational AI governance), Map (understanding the context and risks of AI systems), Measure (analyzing and assessing identified risks), and Manage (prioritizing and acting on risks). The Generative AI Profile (NIST AI 600-1), published in July 2024, extends the framework to address risks specific to generative AI, including issues of content provenance, hallucination, data privacy, and environmental impact.
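Organizations often operationalize the four functions as a living risk register. The sketch below is purely illustrative; the fields and example entries are not NIST's categories or subcategory identifiers.

```python
# Illustrative sketch of tracking AI risks along the AI RMF's four
# functions (Govern, Map, Measure, Manage). Fields and entries are
# invented for this example.

risk_register = {
    "govern":  {"policy_owner": "AI governance board", "review_cycle_months": 6},
    "map":     [{"system": "resume-screener", "context": "hiring",
                 "impacted_groups": ["applicants"]}],
    "measure": [{"system": "resume-screener", "metric": "selection-rate parity",
                 "value": 0.78}],
    "manage":  [{"system": "resume-screener", "action": "retrain with balanced data",
                 "status": "in progress"}],
}

for function, entries in risk_register.items():
    print(function.upper(), "->", entries)
```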
The Institute of Electrical and Electronics Engineers (IEEE) has developed several AI-related standards, including IEEE 7000 (a model process for addressing ethical concerns during system design) and the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. These standards provide technical communities with practical frameworks for embedding ethical considerations into AI development processes.
AI governance faces several persistent challenges that make it one of the most difficult areas of technology policy.
AI capabilities are advancing far faster than governance frameworks can adapt. The development cycle for frontier AI models is measured in months, while legislative processes typically take years. This temporal mismatch means that regulations often address the technology as it existed when drafting began rather than as it exists when rules take effect. The rapid proliferation of generative AI capabilities in 2023 and 2024, for example, caught many regulatory frameworks off guard.
AI development is a global activity, but governance remains primarily national or regional. Different jurisdictions have adopted fundamentally different approaches reflecting distinct values, economic priorities, and political systems. The EU's comprehensive, rights-based regulation, the US's innovation-focused approach, and China's state-directed model present challenges for companies operating across borders and for efforts to establish common international standards. A race in which governments compete to attract AI industries through favorable regulatory environments may undermine necessary oversight.
Governance frameworks increasingly rely on risk-based approaches, but defining, measuring, and comparing AI risks remains technically challenging. There is no widely accepted methodology for quantifying the risks posed by a given AI system, making it difficult to consistently apply risk tiers (as in the EU AI Act) or determine when a model crosses a capability threshold requiring additional safeguards.
Even where binding rules exist, enforcement remains difficult. AI systems are technically complex, making it challenging for regulators to assess compliance. Many regulators lack the technical capacity and resources needed for effective oversight. Voluntary commitments, which form a large part of the current governance landscape, lack enforcement mechanisms entirely, as demonstrated by the uneven compliance with the White House voluntary commitments.
Governments face a genuine tension between promoting AI innovation (for economic competitiveness, scientific progress, and public benefit) and imposing guardrails to prevent harm. Too much regulation risks stifling beneficial AI development and driving it to less regulated jurisdictions. Too little risks allowing harmful applications to proliferate unchecked. The rebranding of AI Safety Institutes in both the US and UK in 2025 reflected this tension, as both governments sought to signal a stronger emphasis on innovation.
Most AI development is concentrated in a small number of wealthy countries and large corporations. Governance processes often reflect this concentration, with developing nations and smaller actors having limited influence over the rules that shape AI's global impact. The UN's governance initiatives, including the Global Digital Compact and the AI capacity development proposals, aim to address this imbalance, but progress remains slow.
AI governance continues to evolve rapidly. Several trends are likely to shape its trajectory in the coming years.
The establishment of dedicated AI governance institutions, such as AI Safety Institutes and the proposed international bodies under the Global Digital Compact, represents a move toward more permanent and specialized governance infrastructure. The International Network of AI Safety Institutes, if sustained, could become a key mechanism for technical coordination across borders.
The EU AI Act's phased implementation through 2027 will serve as a major test of whether comprehensive AI legislation can be effectively enforced and whether it produces the intended outcomes without unduly restricting innovation. Other jurisdictions, including Brazil, Canada, and India, are watching closely as they develop their own legislative approaches.
Corporate governance practices, including responsible scaling policies and safety evaluations, are likely to continue evolving. Whether these remain genuinely effective or become primarily performative will depend in part on the strength of external accountability mechanisms, including regulatory oversight, independent auditing, and public scrutiny.
The governance of increasingly capable AI systems, including those with advanced reasoning, autonomous action, and potential self-improvement capabilities, will pose new challenges that existing frameworks may not adequately address. The interaction between open-source release practices and safety considerations will remain a focal point of debate.