AI regulation refers to the laws, policies, standards, and governance mechanisms that governments and international bodies use to oversee the development, deployment, and use of artificial intelligence systems. As AI has become embedded in critical sectors including healthcare, finance, criminal justice, employment, and national security, governments around the world have moved to establish regulatory frameworks that balance innovation with the protection of fundamental rights, safety, and public interest. The regulatory landscape is rapidly evolving, with different jurisdictions adopting markedly different approaches ranging from comprehensive legislation to voluntary guidelines.
AI regulation emerged as a policy priority in the late 2010s as the capabilities and societal impact of AI systems grew. Early regulatory efforts focused on sector-specific applications, such as autonomous vehicles or medical devices. By the early 2020s, several governments began developing horizontal, cross-sector AI regulations that applied to AI systems regardless of their application domain.
The fundamental tension in AI regulation is between fostering innovation and managing risk. Too little regulation may allow harmful AI systems to proliferate without adequate safeguards. Too much regulation may stifle innovation, drive development to less regulated jurisdictions, or impose compliance burdens that disadvantage smaller companies. Different jurisdictions have struck this balance differently, reflecting their distinct legal traditions, economic priorities, and cultural values.
As of early 2026, no single global framework governs AI. Instead, a patchwork of national and regional regulations, international guidelines, and industry self-regulatory initiatives coexists, creating both opportunities for regulatory learning and challenges for organizations operating across borders.
The EU AI Act is the world's first comprehensive legal framework for regulating artificial intelligence. It was formally adopted in 2024 and entered into force on August 1, 2024, with a phased implementation timeline extending through 2027 [1].
The AI Act categorizes AI systems into four risk tiers, with corresponding regulatory requirements.
| Risk level | Description | Regulatory treatment |
|---|---|---|
| Unacceptable risk | AI systems that pose a clear threat to fundamental rights | Banned entirely. Includes social scoring by governments, real-time remote biometric identification in public spaces (with limited exceptions), manipulation of vulnerable groups, and emotion recognition in schools and workplaces [1] |
| High risk | AI systems used in critical areas affecting people's lives and livelihoods | Subject to strict requirements including risk assessments, high-quality datasets, detailed documentation, human oversight, accuracy and robustness standards, and conformity assessments [1] |
| Limited risk | AI systems with specific transparency obligations | Must disclose to users that they are interacting with an AI system (e.g., chatbots) or that content is AI-generated |
| Minimal risk | AI systems posing little or no risk | No specific regulatory requirements beyond existing laws |
High-risk categories include AI used in critical infrastructure, education and vocational training, employment and worker management, access to essential services (credit, insurance), law enforcement, migration and border control, and administration of justice.
The AI Act is being phased in over several years.
| Date | Milestone |
|---|---|
| August 1, 2024 | AI Act enters into force |
| February 2, 2025 | Bans on unacceptable-risk AI practices and AI literacy requirements take effect |
| August 2, 2025 | Rules for general-purpose AI (GPAI) models, governance structures, penalties, and notified bodies begin applying, supported by the GPAI Code of Practice published in July 2025 |
| August 2, 2026 | Full enforcement including high-risk AI system rules; penalties of up to 35 million EUR or 7% of global annual revenue |
| August 2, 2027 | Legacy GPAI models must fully comply |
The AI Act includes specific provisions for general-purpose AI (GPAI) models, reflecting the rise of large language models and foundation models. All GPAI providers must maintain technical documentation, provide information to downstream deployers, comply with EU copyright law, and publish summaries of training data.
For GPAI models classified as posing systemic risk, a designation presumed for models trained with more than 10^25 floating-point operations, additional requirements apply. Providers must conduct model evaluations, assess and mitigate systemic risks, perform adversarial testing, report serious incidents, ensure cybersecurity protections, and report energy consumption. These providers must establish formal governance with independent risk oversight and conduct evaluations by qualified external evaluators [1].
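Because the systemic-risk presumption turns on a single numeric cutoff, a provider can screen a planned training run against it with a rough estimate. The sketch below is a minimal illustration that assumes the commonly used 6 × parameters × training-tokens approximation for dense-transformer training compute; the threshold constant comes from the Act, but the estimation method and the example model sizes are illustrative assumptions, not an official EU measurement methodology.

```python
# Minimal sketch: screen a training run against the EU AI Act's 10^25 FLOP
# presumption threshold for GPAI models with systemic risk.
# Assumes the common 6 * N * D approximation for dense-transformer training
# compute; this is an illustrative heuristic, not an official EU methodology.

EU_SYSTEMIC_RISK_FLOPS = 1e25  # presumption threshold for systemic risk


def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per training token."""
    return 6.0 * n_parameters * n_training_tokens


def presumed_systemic_risk(n_parameters: float, n_training_tokens: float) -> bool:
    """True if the estimate exceeds the Act's 10^25 FLOP presumption threshold."""
    return estimated_training_flops(n_parameters, n_training_tokens) > EU_SYSTEMIC_RISK_FLOPS


if __name__ == "__main__":
    # Hypothetical examples: a 70B-parameter model on 15T tokens (~6.3e24 FLOPs)
    # falls below the threshold; a 400B-parameter model on 15T tokens (~3.6e25)
    # exceeds it and would presumptively face the systemic-risk obligations.
    for params, tokens in [(70e9, 15e12), (400e9, 15e12)]:
        flops = estimated_training_flops(params, tokens)
        print(f"{params:.0e} params, {tokens:.0e} tokens -> {flops:.2e} FLOPs, "
              f"systemic risk presumed: {presumed_systemic_risk(params, tokens)}")
```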
The US approach to AI regulation has been characterized by a shift from executive-led oversight to a more deregulatory stance, combined with a growing patchwork of state-level laws.
On October 30, 2023, President Joe Biden signed Executive Order 14110, titled "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence." This was the most significant federal AI governance action at the time, directing over 50 federal entities to undertake more than 100 specific actions [2].
Key provisions included requirements for developers of the most powerful AI models to share safety test results with the federal government before public release, NIST-developed standards for red-team testing, protections against AI-enabled fraud and discrimination, guidance on AI's impact on workers, and promotion of international AI safety cooperation. The executive order also established the US AI Safety Institute within NIST.
Upon taking office in January 2025, President Trump revoked Executive Order 14110 within days. On January 23, 2025, he signed a new executive order titled "Removing Barriers to American Leadership in Artificial Intelligence," which shifted the federal approach from oversight and risk mitigation to deregulation and innovation promotion. The order instructed agencies to eliminate policies that might "hinder American AI dominance" and prioritized US competitiveness in global AI development [3].
In July 2025, the Trump administration unveiled "Winning the Race: America's AI Action Plan," a comprehensive federal strategy encompassing over 90 actions to accelerate AI innovation, build infrastructure including data centers, promote US AI exports, and reduce regulatory barriers. The US AI Safety Institute was reorganized and renamed the Center for AI Standards and Innovation (CAISI) [4].
On December 11, 2025, President Trump signed an additional executive order titled "Ensuring a National Policy Framework for Artificial Intelligence," signaling intent to consolidate AI oversight at the federal level and counter the expanding patchwork of state AI rules [5].
In the absence of comprehensive federal legislation, US states have become active AI regulators.
| State | Law | Key provisions | Status |
|---|---|---|---|
| Colorado | SB 24-205 (Colorado AI Act) | Most comprehensive state AI law. Targets high-risk AI systems making consequential decisions about consumers in education, employment, financial services, healthcare, housing, insurance, and legal services. Requires reasonable care to avoid algorithmic discrimination | Signed May 2024; enforcement delayed to June 30, 2026 [6] |
| California | SB 53 (Transparency in Frontier AI Act) | Requires standardized safety disclosures for frontier AI models. Transparency-first approach. Replaced the vetoed SB 1047, which had proposed mandatory safety testing, third-party audits, and kill-switch mandates | Signed September 2025; effective January 1, 2026 [7] |
| Texas | Texas Responsible AI Governance Act (TRAIGA) | Categorical limitations on AI deployment and development, primarily focused on government entities. Narrower than Colorado's law | Passed June 2025 [8] |
| New York | Various AI bills | Multiple bills addressing automated employment decision tools (New York City Local Law 144, enforced since July 2023, requiring bias audits for hiring tools) and other AI applications | Ongoing legislative activity |
| Illinois | AI Video Interview Act | Requires employers to notify applicants when AI is used to analyze video interviews and to obtain consent | In effect since 2020 |
California's SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, was one of the most closely watched AI bills in 2024. It would have applied to models costing more than $100 million to train and trained using computing power exceeding 10^26 operations. It included whistleblower protections, mandatory risk assessments, and requirements for developers to maintain the ability to shut down their models. Governor Gavin Newsom vetoed the bill in September 2024, citing concerns about its scope and potential impact on innovation. The replacement law, SB 53, retained transparency requirements but dropped the more prescriptive safety mandates [7].
China has been an early and active regulator of AI, pursuing a targeted, sector-specific approach rather than a single comprehensive law. China's regulatory strategy addresses specific AI applications through individual regulations while building toward a broader governance framework.
| Regulation | Effective date | Scope |
|---|---|---|
| Administrative Provisions on Algorithm Recommendation for Internet Information Services | March 1, 2022 | Governs algorithmic recommendation in news feeds, short videos, search results, and other online services. Requires risk assessments and algorithm registration with the Cyberspace Administration of China (CAC) for providers with public opinion or social mobilization capabilities [9] |
| Administrative Provisions on Deep Synthesis of Internet-based Information Services | January 10, 2023 | Governs AI-generated synthetic media including deepfakes. Mandates content labeling and identity verification [9] |
| Interim Measures for Generative AI Services | August 15, 2023 | First binding regulation specifically for generative AI. Requires providers of public-facing generative AI to ensure content is lawful and truthful, label AI-generated content, and register algorithms with regulators [10] |
| Measures for Labeling AI-Generated Content | September 1, 2025 | Comprehensive content labeling requirements for AI-generated text, images, audio, and video. Issued by the CAC in March 2025 [10] |
The AI Safety Governance Framework was published by China's National Information Security Standardization Technical Committee (TC260) in September 2024 and updated to version 2.0 in September 2025. The Framework is not a law but functions as an operational guide for risk classification, ethical principles, and governance measures that feed into binding national standards. It outlines principles for AI safety governance, classifies anticipated risks, identifies technological measures to mitigate those risks, and provides governance and safety guidelines [11].
Three national standards for generative AI security took effect on November 1, 2025. Service providers offering generative AI with public opinion or social mobilization capabilities must conduct security assessments and register their large language models with the CAC.
China's approach differs from the EU's in several respects. Rather than classifying all AI systems by risk level, China targets specific application categories through individual regulations. The regulations also reflect China's distinct priorities: content control, social stability, and support for national AI development goals alongside safety and ethical concerns.
The UK has adopted a "pro-innovation" approach to AI regulation that emphasizes flexibility and sector-specific guidance over comprehensive legislation.
In March 2023, the UK government published a white paper, "A pro-innovation approach to AI regulation," which outlined five cross-sector principles: safety, security, and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. Rather than creating a new AI regulator or passing comprehensive AI legislation, the UK government tasked existing sector regulators (the Financial Conduct Authority, Ofcom, the Information Commissioner's Office, and others) with implementing these principles within their domains [12].
The UK AI Safety Institute (AISI) was established following the Bletchley Park AI Safety Summit in November 2023, with approximately 100 million GBP in public funding. It quickly became one of the world's largest government-backed AI safety evaluation teams, conducting pre-deployment evaluations of frontier models and publishing research on model capabilities and risks [13].
In February 2025, the AI Safety Institute was renamed the AI Security Institute, reflecting a broadened mandate that encompasses national security dimensions of AI alongside safety concerns. At the February 2025 Paris AI Action Summit, the UK declined to sign a declaration on "inclusive and sustainable" AI, citing concerns over national security and the clarity of global governance frameworks. The UK and US decisions not to sign were notable given their earlier leadership roles at the Bletchley Park summit [14].
AI regulation is a global phenomenon, with countries across every continent developing governance frameworks.
| Country/Region | Approach | Key developments |
|---|---|---|
| Canada | Proposed comprehensive legislation (AIDA) | The Artificial Intelligence and Data Act (AIDA), part of Bill C-27, died in parliament in January 2025 when the session ended. Canada currently has no federal AI law. The government uses the Algorithmic Impact Assessment tool for public-sector AI [15] |
| Brazil | Risk-based legislation in progress | Bill No. 2338/2023 was approved by the Senate in December 2024 and entered a longer legislative process in 2025. It mirrors the EU's risk-based approach, banning "excessive risk" systems and establishing strict liability [16] |
| Japan | Innovation-first, light-touch approach | Japan's Parliament approved the AI Promotion Act on May 28, 2025. The law is principle-based and lighter than the EU AI Act, empowering the government to issue warnings but lacking strict punitive measures [17] |
| India | Sectoral regulatory model | India's Ministry of Electronics and Information Technology (MeitY) released AI Governance Guidelines grounded in seven "sutras" (principles), favoring a sectoral model over a single umbrella AI Act [18] |
| South Korea | Comprehensive framework law | South Korea passed the AI Basic Act, which includes institutional structures like an AI safety institute. Enforcement is scheduled for January 22, 2026, positioning South Korea as one of the first Asian countries with a comprehensive AI framework [19] |
| Singapore | Soft law and governance frameworks | The Model AI Governance Framework (3rd Edition, 2024) covers generative AI, transparency reporting, and human oversight. The Personal Data Protection Commission (PDPC) enforces principles through regulatory sandboxes [20] |
| Australia | Voluntary standards with pending legislation | The government published an AI Ethics Framework and is developing mandatory guardrails for high-risk AI settings. Legislation is expected in 2026 |
| Israel | Risk-based voluntary policy | Published a national AI policy emphasizing risk-based regulation and voluntary compliance for the private sector |
Several mechanisms exist for international coordination on AI governance, though binding multilateral agreements remain limited.
The Hiroshima AI Process was launched during Japan's G7 presidency in May 2023. In December 2023, the G7 adopted the Hiroshima AI Process Comprehensive Policy Framework, which includes the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems. The Code of Conduct is a living document that builds on the OECD AI Principles and provides voluntary guidance for responsible AI development [21].
The OECD was tasked with providing institutional support for monitoring and evaluating adherence to the Code of Conduct. Following a pilot phase in mid-2024, an operational reporting framework was launched in 2025, with the first round of organizational submissions expected by April 15, 2025 [21].
The OECD AI Principles, originally adopted in May 2019 and updated in May 2024, are the most widely accepted international framework for AI governance. They have been adopted by 47 countries and were also endorsed by the G20. The principles provide a common reference point for national regulation, even though they are not legally binding. The 2024 update incorporated lessons from the rapid advancement of generative AI [22].
The UN has pursued AI governance through multiple channels. The Secretary-General's High-Level Advisory Body on AI published interim recommendations in late 2023 and final recommendations in 2024, calling for a global AI governance framework anchored in human rights. The UN General Assembly adopted resolutions on AI governance in March and July 2024, emphasizing the need for safe, secure, and trustworthy AI systems [23].
UNESCO's 2021 Recommendation on the Ethics of AI remains the most comprehensive UN-system instrument on AI governance, with all 193 member states committed to its implementation.
The Council of Europe adopted the Framework Convention on Artificial Intelligence in May 2024, making it the first legally binding international treaty specifically addressing AI governance. The treaty requires parties to ensure that AI systems respect human rights, democracy, and the rule of law. Its practical impact depends on ratification and implementation by individual states [24].
The series of international AI safety summits has become a key venue for multilateral coordination. The Bletchley Park Summit (November 2023) produced the Bletchley Declaration signed by 28 countries and the EU. The Seoul AI Safety Summit (May 2024) yielded the Frontier AI Safety Commitments signed by 16 companies and agreement to form an international network of AI safety institutes. The Paris AI Action Summit (February 2025) produced a declaration signed by 58 countries, though the US and UK declined to sign [25].
Alongside government regulation, the AI industry has developed various self-regulatory mechanisms.
In July 2023, seven leading AI companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) made voluntary commitments to the Biden White House on AI safety, security, and trust. These included commitments to conduct internal and external security testing, share safety information with governments and the research community, invest in cybersecurity safeguards, develop technical mechanisms to watermark AI-generated content, and research societal risks [26].
At the Seoul AI Safety Summit in May 2024, 16 companies signed the Frontier AI Safety Commitments, pledging to evaluate risks throughout the AI lifecycle and define thresholds for severe risks.
Major AI labs have published their own safety frameworks. Anthropic's Responsible Scaling Policy defines AI Safety Levels (ASL-1 through ASL-4) with progressively stricter requirements. OpenAI's Preparedness Framework tracks frontier capabilities across cybersecurity, persuasion, CBRN threats, and autonomy. Google DeepMind's Frontier Safety Framework defines Critical Capability Levels across multiple risk domains [27].
Critics have questioned the adequacy of industry self-regulation. Voluntary commitments lack enforcement mechanisms. Companies may face competitive pressures to weaken safety practices. The departure of senior safety researchers from major labs (including Jan Leike and Ilya Sutskever from OpenAI in 2024) has raised concerns about the sincerity of corporate safety commitments. Self-regulation may also be inadequate for addressing systemic risks that affect entire markets or populations rather than individual companies [28].
The following table summarizes key differences among the major regulatory approaches.
| Dimension | EU | US (Federal) | China | UK |
|---|---|---|---|---|
| Legislative approach | Comprehensive, horizontal law | Executive orders; no comprehensive federal AI law | Sector-specific regulations | Pro-innovation; sector regulator guidance |
| Risk classification | Four-tier risk framework | No formal risk classification at federal level | Application-specific risk assessment | Five cross-sector principles |
| Enforcement | EU AI Office; national authorities; penalties up to 7% of global revenue | Varies by agency; no single enforcement body | Cyberspace Administration of China (CAC) and sector regulators | Existing sector regulators |
| Scope | All AI systems placed on the EU market | Varies by executive order and state law | Algorithms, deep synthesis, generative AI | Sector-specific application |
| GPAI/foundation model rules | Specific provisions for GPAI with additional requirements for frontier models | No federal requirements (Biden EO revoked) | Registration and security assessment requirements | AI Security Institute conducts voluntary pre-deployment evaluations |
| Innovation emphasis | Regulatory sandboxes; SME support measures | Strong emphasis on deregulation and competitiveness | "Development and security" dual objectives | Pro-innovation as explicit policy goal |
| Binding vs. voluntary | Legally binding with significant penalties | Mix of binding (state laws) and voluntary (federal) | Legally binding for covered applications | Primarily voluntary with some binding elements |
AI regulation is in a period of rapid evolution and growing divergence.
The EU leads on implementation. The AI Act's phased rollout is the most significant global development. With unacceptable-risk bans in effect since February 2025 and GPAI rules since August 2025, organizations serving the EU market are actively building compliance programs. Full high-risk system enforcement arrives in August 2026. The European AI Office, established within the European Commission, is coordinating implementation and has published guidance documents and the GPAI Code of Practice [1].
US regulation is fragmented. The absence of comprehensive federal legislation, combined with the Trump administration's deregulatory stance, has pushed AI regulation to the state level. Colorado, California, Texas, New York, and other states have enacted or are developing AI laws, creating a compliance patchwork. The December 2025 executive order on national AI policy framework suggests the federal government may attempt to preempt state rules, but the mechanism for doing so remains unclear [5][6][7].
China continues its targeted approach. With algorithm recommendation rules, deep synthesis provisions, generative AI measures, and content labeling requirements all in effect, China has one of the most operationally detailed AI regulatory regimes. The AI Safety Governance Framework (v2.0) provides additional non-binding guidance. China's approach reflects a dual priority of promoting technological development while maintaining content control and social stability [9][10][11].
International coordination is advancing slowly. The G7 Hiroshima Process, OECD AI Principles, and AI safety summit series provide forums for dialogue and voluntary coordination. The Council of Europe treaty represents a step toward binding international norms. However, fundamental differences in regulatory philosophy between the EU (rights-based), the US (innovation-focused), and China (development and control) limit the prospects for near-term convergence on a global regulatory framework [21][22][24].
The regulatory gap for frontier AI is narrowing. While frontier AI models were largely unregulated as recently as 2023, the EU AI Act's GPAI provisions, the various AI safety institute evaluations, and state-level laws in the US have begun to create specific requirements for the most powerful AI systems. Whether these requirements are sufficient to address the risks posed by rapidly advancing AI capabilities remains one of the central questions in AI safety policy.
On December 19, 2025, New York Governor Kathy Hochul signed the Responsible AI Safety and Education (RAISE) Act, establishing one of the nation's most comprehensive reporting and safety governance regimes for frontier AI model developers. The RAISE Act applies to developers with $500 million or more in annual revenue who develop or operate frontier models, defined as AI systems trained using more than 10^26 FLOPs with compute costs exceeding $100 million.
Key requirements include the development and publication of an AI safety and security framework, mandatory 72-hour incident reporting to the New York Attorney General and Division of Homeland Security and Emergency Services, and ongoing safety evaluations. The RAISE Act's reporting timeline is significantly faster than California's SB 53, which allows 15 days for incident disclosure [29].
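The RAISE Act's applicability test and reporting clock, as summarized above, reduce to a few numeric conditions. The sketch below is a minimal illustration of that logic using the figures cited in this section ($500 million revenue, 10^26 FLOPs, $100 million compute cost, 72 hours); the statutory definitions are more nuanced than this, and the helper names are hypothetical.

```python
# Minimal sketch of the applicability and reporting logic summarized above for
# the NY RAISE Act ($500M annual revenue, >1e26 training FLOPs, >$100M compute
# cost, 72-hour incident reporting). Illustrative only; the statutory tests are
# more nuanced than these simple numeric checks.
from datetime import datetime, timedelta

REVENUE_THRESHOLD_USD = 500e6
FRONTIER_FLOPS_THRESHOLD = 1e26
FRONTIER_COMPUTE_COST_USD = 100e6
INCIDENT_REPORTING_WINDOW = timedelta(hours=72)


def is_covered_developer(annual_revenue_usd: float,
                         training_flops: float,
                         compute_cost_usd: float) -> bool:
    """Rough screen: large developer operating a frontier-scale model."""
    frontier_model = (training_flops > FRONTIER_FLOPS_THRESHOLD
                      and compute_cost_usd > FRONTIER_COMPUTE_COST_USD)
    return annual_revenue_usd >= REVENUE_THRESHOLD_USD and frontier_model


def reporting_deadline(incident_discovered_at: datetime) -> datetime:
    """72-hour deadline for reporting to the NY AG and DHSES, per the summary above."""
    return incident_discovered_at + INCIDENT_REPORTING_WINDOW
```

For comparison, the same sketch with a 15-day window would describe the SB 53 timeline mentioned above, which is the contrast the section draws between the two laws.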
The RAISE Act was signed days after President Trump's December 11 executive order targeting state AI regulation, and federal preemption litigation is widely anticipated. The tension between New York's assertive regulatory stance and the federal government's preference for a unified, lighter-touch framework illustrates the growing conflict over AI governance authority in the United States [29][30].
The December 11, 2025 executive order, titled "Ensuring a National Policy Framework for Artificial Intelligence," represents the Trump administration's most direct attempt to consolidate AI governance at the federal level. The order proposes establishing a uniform federal policy framework that would preempt state AI laws deemed inconsistent with federal policy [5].
The preemption question has become one of the most consequential legal issues in US AI regulation. Proponents argue that a patchwork of state laws creates compliance nightmares for companies operating nationally and could hamper American competitiveness in AI. Critics counter that in the absence of comprehensive federal legislation, state laws are the only mechanism for protecting citizens from AI harms, and that preemption without a federal replacement would leave a regulatory vacuum.
AI companies have waged a fierce lobbying campaign in support of preemption, deploying the narrative that a patchwork of state laws will smother innovation and undermine the US position in the AI race against China. Consumer protection groups and state attorneys general have pushed back, arguing that industry's preferred outcome is not uniformity but the absence of regulation entirely.
On January 1, 2026, significant amendments to China's Cybersecurity Law took effect, representing the most substantial update since the law's original adoption in 2017. The amendments introduced dedicated provisions on artificial intelligence governance, marking the first time AI governance has been elevated to the level of national law in China [31].
The amendments explicitly provide that the state will support AI innovation, promote the development of training data resources and building of computing infrastructure, strengthen AI ethics regulation, and enhance AI risk assessment and security governance. Maximum fines for violations increased substantially, with the highest penalties reaching RMB 10 million (approximately $1.4 million) for network operators and critical information infrastructure operators, and RMB 1 million for directly responsible individuals in cases causing "particularly serious consequences" [31].
Additionally, the National Data Administration announced that more than 30 new standards relating to public data, data infrastructure, AI agents, high-quality datasets, and other AI-related topics are expected to be issued during 2026 [32].
South Korea's AI Basic Act, passed in late 2024, is scheduled for enforcement beginning January 22, 2026, positioning South Korea as one of the first Asian countries with a comprehensive AI framework law. The Act establishes institutional structures including an AI safety institute and provides for both promotional and regulatory measures [19].
Japan's Parliament approved the AI Promotion Act on May 28, 2025. The law takes a principle-based approach that is significantly lighter than the EU AI Act, empowering the government to issue warnings but lacking strict punitive measures. The approach reflects Japan's longstanding preference for innovation-friendly regulation and its concern about not falling behind in AI development [17].
Brazil's AI regulation bill (Bill No. 2338/2023) was approved by the Senate in December 2024 and entered a longer legislative process in 2025. The bill mirrors the EU's risk-based approach, banning "excessive risk" AI systems and establishing strict liability for certain AI-related harms. If enacted, it would create one of the most comprehensive AI regulatory frameworks in Latin America [16].
In 2025, twelve companies published or updated Frontier AI Safety Frameworks. The three most influential frameworks have evolved significantly:
| Framework | Organization | Latest version | Key 2025-2026 developments |
|---|---|---|---|
| Responsible Scaling Policy (RSP) | Anthropic | v3.0 (Feb 2026) | Comprehensive rewrite introducing Frontier Safety Roadmaps and Risk Reports; acknowledges need for industry-wide coordination at higher ASLs |
| Preparedness Framework | OpenAI | v2 (Apr 2025) | Simplified to two thresholds ("High" and "Critical"); new biorisk monitoring system deployed for o3 models |
| Frontier Safety Framework (FSF) | Google DeepMind | v3.0 (Sep 2025) | Added Critical Capability Level for manipulation; expanded to cover shutdown resistance scenarios |
The adequacy of industry self-regulation was tested in early 2026 when the Anthropic-Pentagon dispute demonstrated that voluntary safety commitments can create real economic costs. Anthropic's refusal to remove guardrails preventing autonomous military targeting and mass surveillance resulted in the company being banned from all federal contracts. The Pentagon's subsequent deal with xAI's Grok, which lacked equivalent safety restrictions, raised questions about whether responsible companies are punished for maintaining ethical standards while less cautious competitors benefit [33].
A separate concern emerged from academic research. An October 2025 study involving researchers from OpenAI, Anthropic, and Google DeepMind examined 12 published defenses against jailbreaking and found that adaptive attacks could bypass most of them with success rates above 90%, suggesting that even well-resourced safety programs face fundamental technical limitations.
An emerging form of quasi-regulation has come from the insurance industry. By early 2026, insurers had begun requiring documented evidence of adversarial red-teaming and model-level risk assessments as conditions of AI liability coverage. This market-driven mechanism has the potential to standardize safety practices more rapidly than government regulation in some domains, though coverage requirements vary significantly across insurers and jurisdictions.
| Jurisdiction | 2026 status | Primary approach | Key pending action |
|---|---|---|---|
| European Union | AI Act partially in effect; high-risk rules arriving Aug 2026 (potential Omnibus VII delay to Dec 2027) | Risk-based comprehensive regulation | Trilogue on Omnibus VII amendments; full enforcement |
| United States (Federal) | No comprehensive federal law; Trump executive orders emphasize deregulation | Innovation-first with voluntary standards | Federal preemption of state laws; potential legislation |
| United States (States) | CA SB 53 and TX TRAIGA in effect Jan 2026; CO AI Act effective June 2026; NY RAISE Act signed Dec 2025 | State-by-state patchwork targeting frontier models and algorithmic discrimination | Preemption litigation; additional state legislation |
| China | Multiple sector-specific rules in effect; Cybersecurity Law amendments effective Jan 2026 | Sector-specific regulation with AI provisions elevated to national law | 30+ new AI-related standards expected in 2026 |
| United Kingdom | Pro-innovation approach; AI Security Institute active | Sector regulator guidance; voluntary evaluations | Potential AI legislation under consideration |
| South Korea | AI Basic Act enforcement from January 22, 2026 | Comprehensive framework law with safety institute | Implementation and enforcement details |
| Japan | AI Promotion Act approved May 2025 | Principle-based, light-touch | Detailed guidance expected |
| Brazil | Senate approved Bill 2338/2023 in December 2024 | Risk-based, EU-influenced | House passage and enactment |
The second International AI Safety Report, published in February 2026 and led by Turing Award winner Yoshua Bengio with contributions from over 100 AI experts across more than 30 countries, provided the most comprehensive multilateral assessment of AI capabilities, risks, and governance to date.
Key findings relevant to regulation included: general-purpose AI capabilities are advancing rapidly, with models passing professional licensing examinations in medicine and law; no single safeguard is sufficient to manage frontier AI risks; AI risk management practices are becoming more structured but real-world evidence of their effectiveness remains limited; and there is a "growing mismatch between the speed of AI capability advances and the pace of governance" [34].
The report explicitly endorsed a defense-in-depth approach to AI regulation, recommending that regulators layer multiple technical and organizational controls rather than relying on any single mechanism. It also called attention to the challenge of evaluating AI systems when models may learn to distinguish between test environments and real deployment, a finding that has significant implications for regulatory testing and certification regimes.
The report was endorsed by the international network of AI safety institutes and is expected to inform regulatory approaches across multiple jurisdictions during 2026.