The EU AI Act (formally, Regulation (EU) 2024/1689) is a regulation of the European Union that establishes a comprehensive legal framework for the development, deployment, and use of artificial intelligence systems. It is the first major AI-specific legislation enacted by any jurisdiction worldwide and has been widely compared to the General Data Protection Regulation (GDPR) in its ambition and potential global reach [1]. The regulation adopts a risk-based approach, classifying AI systems according to the level of harm they may pose and imposing obligations that scale with that risk. It entered into force on 1 August 2024, with provisions phased in between February 2025 and August 2027 [2].
The roots of the EU AI Act trace back to the European Commission's publication of a White Paper on Artificial Intelligence in February 2020, which laid out policy options for regulating AI in Europe. Building on that groundwork, the Commission released its formal legislative proposal on 21 April 2021 [3]. The draft regulation introduced the risk-based classification system that would become the Act's defining feature, drawing on earlier work by the High-Level Expert Group on AI.
The proposal then entered the ordinary legislative procedure, moving through both co-legislators: the Council of the European Union and the European Parliament. The Council adopted its general approach (negotiating position) in December 2022, while the Parliament's Internal Market and Consumer Protection (IMCO) and Civil Liberties, Justice and Home Affairs (LIBE) committees worked on extensive amendments throughout 2022 and early 2023.
On 14 June 2023, the European Parliament adopted its negotiating position with 499 votes in favour, 28 against, and 93 abstentions [4]. Parliament's amendments were significant. They aligned the definition of AI systems with the OECD definition, expanded the list of prohibited practices, added requirements for fundamental rights impact assessments for high-risk systems, and introduced a layered approach to regulating general-purpose AI models, including generative AI systems like ChatGPT. The emergence of powerful foundation models in 2022 and 2023 forced legislators to address capabilities that the original 2021 proposal had not anticipated.
Trilogue negotiations between the Parliament, the Council, and the Commission took place across multiple rounds in June, July, September, October, and December 2023. After a marathon 36-hour negotiating session, the three institutions reached a provisional political agreement on 9 December 2023 [5].
The final text was approved by the European Parliament on 13 March 2024 and unanimously endorsed by the Council on 21 May 2024. The regulation was formally signed on 13 June 2024, published in the Official Journal of the European Union on 12 July 2024, and entered into force on 1 August 2024 [6].
The AI Act's regulatory architecture rests on a four-tier classification system. The level of obligation imposed on providers and deployers depends on the risk category into which an AI system falls. The framework deliberately leaves most AI systems (those posing minimal risk) largely unregulated, while concentrating oversight on applications that could cause serious harm.
| Risk Level | Regulatory Treatment | Examples |
|---|---|---|
| Unacceptable risk | Prohibited outright | Social scoring systems by public authorities; real-time remote biometric identification in public spaces (with narrow law enforcement exceptions); AI that exploits vulnerabilities of children, elderly, or disabled persons; subliminal manipulation techniques that cause harm; untargeted scraping of facial images from the internet or CCTV; emotion recognition in workplaces and schools |
| High risk | Subject to strict requirements before market placement and throughout lifecycle | AI in critical infrastructure (water, gas, electricity supply); systems used in education admissions and assessment; employment and worker management tools (recruitment, screening, performance evaluation); credit scoring and insurance pricing; law enforcement tools for risk assessment and evidence evaluation; migration and border control systems; AI assisting judicial authorities |
| Limited risk (transparency obligations) | Must disclose AI involvement to users | Chatbots and conversational AI; deepfake generation systems; AI-generated text published to inform the public; emotion recognition systems (where not prohibited) |
| Minimal risk | No specific obligations (voluntary codes of conduct encouraged) | AI-enabled video games; spam filters; inventory management systems; most general consumer applications |
This risk-based design was deliberately chosen over more prescriptive approaches. Rather than regulating the technology itself, the Act focuses on specific use cases and their potential impact on health, safety, and fundamental rights [7].
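The tiered structure lends itself to a simple mental model: classify first, then apply the obligations attached to the tier. The following Python sketch is illustrative shorthand for that mapping only; the tier names come from the Act, but the one-line obligation summaries are paraphrases, and actual classification turns on the Act's detailed legal tests rather than a lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers."""
    UNACCEPTABLE = "unacceptable"  # Article 5: prohibited outright
    HIGH = "high"                  # Chapter III: strict lifecycle requirements
    LIMITED = "limited"            # Article 50: transparency obligations
    MINIMAL = "minimal"            # No specific obligations

# Illustrative paraphrases of each tier's regulatory consequence.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "may not be placed on the EU market",
    RiskTier.HIGH: "risk management, documentation, oversight, conformity assessment",
    RiskTier.LIMITED: "disclose AI involvement to users",
    RiskTier.MINIMAL: "voluntary codes of conduct only",
}

def obligations_for(tier: RiskTier) -> str:
    """Look up the (paraphrased) obligations attached to a risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```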
Article 5 of the AI Act defines the practices that the EU considers to pose an unacceptable risk and are therefore banned outright. These prohibitions became enforceable on 2 February 2025, the first provisions of the Act to take effect [8].
The prohibited practices include:
Social scoring: AI systems used by public authorities (or on their behalf) to evaluate or classify individuals based on their social behaviour or personality characteristics, where that classification leads to detrimental treatment in contexts unrelated to the original data collection, or treatment that is disproportionate to the behaviour.
Exploitation of vulnerabilities: AI systems that deploy subliminal techniques, or deliberately exploit the vulnerabilities of specific groups (such as children, persons with disabilities, or economically disadvantaged individuals), in ways that are likely to cause physical or psychological harm.
Real-time remote biometric identification in public spaces: The use of real-time biometric identification systems (primarily facial recognition) by law enforcement in publicly accessible spaces is prohibited as a general rule. Narrow exceptions exist for three scenarios: searching for specific victims of kidnapping, trafficking, or sexual exploitation; preventing a genuine and imminent threat to life or a foreseeable terrorist attack; and locating or identifying suspects of serious criminal offences. Each exception requires prior judicial or independent administrative authorization.
Untargeted facial image scraping: AI systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage are banned.
Emotion recognition in workplaces and educational institutions: AI systems designed to infer the emotions of individuals in the workplace or in educational settings are prohibited, with limited exceptions for medical or safety-related purposes.
Biometric categorization using sensitive attributes: AI systems that categorize individuals based on biometric data to deduce or infer race, political opinions, trade union membership, religious beliefs, or sexual orientation are prohibited. Limited exceptions apply for labelling or filtering biometric datasets acquired lawfully and for certain law enforcement uses.
Predictive policing of individuals: AI systems that assess the risk that an individual will commit a criminal offence based solely on profiling or personality traits are prohibited. Systems that augment human assessments based on objective, verifiable facts directly linked to criminal activity remain permissible.
In February 2025, the European Commission published guidelines clarifying the scope and interpretation of these prohibited practices, providing practical examples for organizations to assess their compliance [9].
The Act imposes the most detailed obligations on high-risk AI systems. These systems are identified through two pathways. First, AI systems that serve as safety components of products already covered by existing EU harmonization legislation (such as medical devices, machinery, or aviation equipment) automatically qualify as high risk under Article 6(1). Second, standalone AI systems deployed in the sensitive domains listed in Annex III qualify as high risk under Article 6(2).
The eight domains enumerated in Annex III are:

- Biometrics (where not prohibited outright): remote biometric identification, biometric categorization based on sensitive attributes, and emotion recognition systems
- Critical infrastructure: safety components in the management and operation of critical digital infrastructure, road traffic, and the supply of water, gas, heating, and electricity
- Education and vocational training: admission, evaluation of learning outcomes, and monitoring of prohibited behaviour during tests
- Employment and worker management: recruitment, screening, promotion, termination, task allocation, and performance monitoring
- Access to essential private and public services: eligibility for public benefits, creditworthiness assessment, life and health insurance pricing, and emergency call dispatching
- Law enforcement: individual risk assessments, polygraph-type tools, evidence evaluation, and profiling
- Migration, asylum, and border control management: visa and asylum application processing, risk assessments, and verification of travel documents
- Administration of justice and democratic processes: systems assisting judicial authorities and systems intended to influence the outcome of elections or referendums
Providers of high-risk AI systems must satisfy a comprehensive set of requirements before placing their systems on the EU market:
Risk management system (Article 9): Providers must establish, implement, and maintain a risk management system that operates as a continuous, iterative process throughout the AI system's entire lifecycle. The system must identify and analyze known and reasonably foreseeable risks, estimate and evaluate risks that may emerge when the system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, and adopt appropriate risk mitigation measures [10].
Data and data governance (Article 10): Training, validation, and testing datasets must meet quality criteria. Data must be relevant, sufficiently representative, and, to the best extent possible, free of errors and complete. Providers must consider the geographical, contextual, behavioural, or functional setting within which the system is intended to be used.
Technical documentation (Article 11): Before a high-risk system is placed on the market, the provider must draw up technical documentation demonstrating compliance. This documentation must be kept up to date.
Record-keeping (Article 12): High-risk systems must be designed to enable automatic logging of events ("logs") throughout the system's lifetime. Deployers must retain automatically generated logs for at least six months.
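As a concrete illustration of what Article 12-style automatic event logging might look like in practice, the sketch below appends structured records to an append-only JSON-lines file. The field names and schema are hypothetical: the Act specifies traceability goals rather than a log format.

```python
import json
import time
import uuid

def log_event(log_path: str, model_version: str,
              input_ref: str, output_ref: str) -> None:
    """Append one inference event as a JSON line to an append-only log.

    Hypothetical schema for illustration only; the AI Act requires
    traceability appropriate to the system, not these exact fields.
    """
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model_version": model_version,
        "input_ref": input_ref,    # pointer to the input, not the raw data
        "output_ref": output_ref,  # pointer to the system's output
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```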
Transparency and provision of information to deployers (Article 13): Systems must be designed to operate with sufficient transparency for deployers to interpret and use the system's output appropriately. Instructions for use must include information about the provider's identity, the system's characteristics, capabilities, limitations, performance metrics, and known or foreseeable circumstances that could lead to risks.
Human oversight (Article 14): High-risk systems must be designed so that they can be effectively overseen by natural persons during their period of use. Individuals assigned to oversight roles must have the necessary competence, training, and authority. They must be able to fully understand the system's capabilities and limitations, correctly interpret outputs, and decide not to use the system, override its output, or stop it entirely [11].
Accuracy, robustness, and cybersecurity (Article 15): Systems must achieve an appropriate level of accuracy, robustness, and cybersecurity, performing consistently throughout their lifecycle. They must be resilient against errors, faults, and attempts by unauthorized third parties to exploit vulnerabilities.
Deployers of high-risk systems also face their own obligations under Article 26. They must use systems in accordance with the provider's instructions, ensure human oversight by qualified persons, monitor system operation for risks, report serious incidents, and (for public bodies and certain private entities) conduct fundamental rights impact assessments before deploying the system.
Conformity assessment procedures vary depending on the type of high-risk system. For most Annex III systems, providers may conduct self-assessment against the requirements, while certain biometric identification systems require third-party assessment by a notified body [12].
One of the most consequential additions to the AI Act came during the 2023 legislative negotiations, when the Parliament and Council introduced provisions specifically targeting general-purpose AI (GPAI) models. The original 2021 Commission proposal had not addressed foundation models or large language models directly, as these technologies had not yet risen to prominence. The rapid rise of systems like ChatGPT in late 2022 and early 2023 made it clear that the legislation needed to account for powerful, general-purpose models that could be integrated into countless downstream applications [13].
The Act defines a GPAI model as an AI model that displays significant generality, is capable of competently performing a wide range of distinct tasks regardless of how it is placed on the market, and can be integrated into a variety of downstream systems or applications. The European Commission's guidelines, published in 2025, further clarified that models exceeding 10^23 FLOPs in training compute are presumptively considered GPAI models within the meaning of the Act [14].
All providers of GPAI models must:

- Draw up and keep up to date technical documentation of the model, including its training and testing process and evaluation results
- Provide information and documentation to downstream providers that intend to integrate the model into their own AI systems
- Put in place a policy to comply with Union copyright law, including honouring rights reservations under the text-and-data-mining exception
- Publish a sufficiently detailed summary of the content used to train the model

Providers of models released under free and open-source licences are exempt from the first two obligations, unless the model poses systemic risk.
The Act introduces a specific category: GPAI models with systemic risk. A model is presumed to pose systemic risk when the cumulative amount of compute used for its training exceeds 10^25 FLOPs. This is a rebuttable presumption; the Commission may also designate a model as posing systemic risk based on other criteria, such as the number of registered end users, the degree of model autonomy, or the extent of its impact on the internal market [15].
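To make the two compute thresholds concrete, the sketch below estimates training compute with the common 6 × parameters × tokens rule of thumb for dense transformer models and compares the result against the 10^23 FLOPs GPAI presumption and the 10^25 FLOPs systemic-risk presumption. The 6ND heuristic is an engineering approximation, not a method the Act prescribes, and the example model is hypothetical.

```python
GPAI_PRESUMPTION_FLOP = 1e23   # Commission guidance: presumptively a GPAI model
SYSTEMIC_RISK_FLOP = 1e25      # Article 51: presumption of systemic risk

def estimated_training_flop(n_params: float, n_tokens: float) -> float:
    """Estimate training compute via the common ~6*N*D approximation.

    Roughly 6 FLOPs per parameter per training token for a dense
    transformer; an engineering heuristic, not the Act's methodology.
    """
    return 6.0 * n_params * n_tokens

# Hypothetical example: a 70B-parameter model trained on 15T tokens.
flop = estimated_training_flop(70e9, 15e12)
print(f"estimated compute: {flop:.1e} FLOP")                          # ~6.3e+24
print("GPAI presumption met:", flop >= GPAI_PRESUMPTION_FLOP)         # True
print("systemic-risk presumption met:", flop >= SYSTEMIC_RISK_FLOP)   # False
```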
Providers of GPAI models with systemic risk face additional obligations:

- Perform model evaluations using standardised protocols, including conducting and documenting adversarial testing
- Assess and mitigate possible systemic risks at Union level, including their sources
- Track, document, and report serious incidents and possible corrective measures to the AI Office and, where relevant, national authorities without undue delay
- Ensure an adequate level of cybersecurity protection for the model and its physical infrastructure
The Act provides for voluntary codes of practice as a mechanism for GPAI model providers to demonstrate compliance with their obligations. The AI Office led a multi-stakeholder process to draft these codes. The General-Purpose AI Code of Practice was finalized and published in mid-2025, covering transparency, copyright compliance, safety evaluation, and risk mitigation [16]. As of early 2026, a separate Code of Practice on marking and labelling of AI-generated content was under development.
The AI Act follows a staggered implementation schedule. Although the regulation entered into force on 1 August 2024, different provisions become applicable at different dates to give organizations time to prepare.
| Date | Provisions Taking Effect |
|---|---|
| 1 August 2024 | Entry into force of the regulation |
| 2 February 2025 | Prohibited AI practices (Article 5) become enforceable; AI literacy obligations (Article 4) apply to all providers and deployers |
| 2 August 2025 | Rules on GPAI models (Chapter V) become applicable; governance provisions take effect; the penalty regime comes into force; obligations related to notified bodies apply |
| 2 August 2026 | Most remaining provisions become applicable, including requirements for high-risk AI systems listed in Annex III; transparency obligations for limited-risk systems; registration in the EU database |
| 2 August 2027 | Requirements for high-risk AI systems that are safety components of products regulated under existing EU harmonization legislation (Annex I) become applicable |
In March 2026, a proposal to extend certain deadlines gained significant momentum. As part of the "Omnibus VII" simplification package, the European Commission proposed delaying the application dates for high-risk AI system rules. Under this proposal, stand-alone high-risk systems would face compliance requirements starting 2 December 2027, while high-risk systems embedded in products would have until 2 August 2028. On 13 March 2026, the Council adopted its negotiating position on this amendment, and trilogue negotiations with the Parliament are expected to follow [17].
The AI Act establishes a multi-layered enforcement architecture that operates at both the EU and Member State levels.
The European AI Office, established within the European Commission's Directorate-General for Communications Networks, Content and Technology (DG CNECT), holds direct enforcement authority over providers of GPAI models. Its powers include requesting documentation and information, conducting evaluations to assess compliance with GPAI obligations, investigating systemic risks, and requesting corrective measures. The AI Office also plays a coordinating role across Member States, developing guidelines, templates, and codes of practice to support consistent implementation [18].
Each Member State must designate at least one national competent authority responsible for overseeing the AI Act's application within its territory. These authorities include a market surveillance authority and a notifying authority (responsible for conformity assessment bodies). National authorities are the primary enforcement bodies for all provisions other than those relating to GPAI models. By August 2025, Member States were required to have designated these authorities, though as of early 2026, only eight of the 27 Member States had publicly established their single contact points [19].
The governance framework is further supported by several advisory and coordination bodies:

- The European Artificial Intelligence Board, composed of representatives of each Member State, advises the Commission and promotes consistent application of the Act across the Union
- An Advisory Forum brings together stakeholders from industry, startups and SMEs, civil society, and academia to provide technical expertise
- A Scientific Panel of independent experts supports the AI Office in monitoring GPAI models and may issue qualified alerts when it identifies systemic risks
The Act establishes a tiered fine structure:
| Violation | Maximum Fine |
|---|---|
| Prohibited AI practices (Article 5 violations) | Up to 35 million EUR or 7% of total worldwide annual turnover, whichever is higher |
| Non-compliance with high-risk AI system obligations and other major provisions | Up to 15 million EUR or 3% of total worldwide annual turnover, whichever is higher |
| Supplying incorrect, incomplete, or misleading information to authorities | Up to 7.5 million EUR or 1% of total worldwide annual turnover, whichever is higher |
| GPAI model provider violations | Up to 15 million EUR or 3% of total worldwide annual turnover, whichever is higher |
For small and medium-sized enterprises (SMEs), including startups, fines are capped at the lower of the percentage or the absolute amount in each tier, providing a degree of proportionality [20].
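The fine structure reduces to simple arithmetic: for most undertakings the ceiling in each tier is the higher of the absolute cap and the turnover percentage, while for SMEs it is the lower of the two. The sketch below illustrates that rule only; actual penalties are set by enforcement authorities within these ceilings.

```python
def max_fine_eur(abs_cap_eur: float, pct_of_turnover: float,
                 worldwide_turnover_eur: float, is_sme: bool) -> float:
    """Compute the fine ceiling for one tier (illustrative only).

    Standard undertakings: the HIGHER of the absolute cap and the
    percentage of worldwide annual turnover. SMEs and startups: the
    LOWER of the two.
    """
    pct_amount = pct_of_turnover * worldwide_turnover_eur
    if is_sme:
        return min(abs_cap_eur, pct_amount)
    return max(abs_cap_eur, pct_amount)

# Prohibited-practice tier (35M EUR or 7%) for a firm with 2B EUR turnover:
print(max_fine_eur(35_000_000, 0.07, 2_000_000_000, is_sme=False))  # 140000000.0
# Same tier for an SME with 10M EUR turnover:
print(max_fine_eur(35_000_000, 0.07, 10_000_000, is_sme=True))      # 700000.0
```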
The EU AI Act is frequently discussed in the context of the "Brussels Effect," a term describing the phenomenon whereby EU regulations become de facto global standards because multinational corporations find it more efficient to adopt a single compliance framework rather than maintaining different practices for different markets. The GDPR's influence on global privacy law is the most cited precedent, and many observers expect the AI Act to follow a similar trajectory [21].
Several factors support this expectation. The regulation has extraterritorial reach: it applies not only to providers established in the EU but also to providers and deployers located outside the EU if the output of their AI system is used within the Union. Companies like Adobe and OpenAI have already integrated technical compliance measures, such as C2PA content provenance watermarking, into their products globally, not just for European users [22].
The Act has also influenced other jurisdictions' regulatory thinking. Brazil's AI regulation efforts, Canada's Artificial Intelligence and Data Act (AIDA), and various legislative proposals in the United States have drawn on the EU's risk-based classification framework. International bodies including the OECD, the G7 Hiroshima AI Process, and the Council of Europe's Framework Convention on AI have all engaged with concepts that parallel the EU approach [23].
However, the Brussels Effect has important limitations in the AI domain. The United States, under its "America's AI Action Plan" released in 2025, has pursued a largely deregulatory approach intended to maintain American leadership in AI development. With US private investment in AI exceeding $110 billion in 2024 (more than five times Europe's total), some analysts argue that the competitive dynamics of AI development differ fundamentally from the consumer product markets where the Brussels Effect has historically been strongest [24]. China, too, has developed its own distinct regulatory framework for AI, including sector-specific rules for algorithmic recommendation, deepfakes, and generative AI, that operates independently of the EU model.
The EU AI Act has generated substantial debate both within Europe and internationally. The criticisms come from multiple directions, reflecting the inherent tension between the regulation's twin goals of protecting fundamental rights and fostering innovation.
The most persistent criticism from the technology industry centers on the compliance burden. More than 110 European companies, including major firms such as Airbus, ASML, and Mistral AI, have called on the Commission to delay enforcement of high-risk system obligations, citing unclear rules and delayed supporting guidance. Startup founders and investors have published open letters warning that the regulation risks driving talent and investment away from Europe at a critical moment in AI development [25].
The compliance challenge is particularly acute for SMEs. Overlapping requirements across the AI Act, the GDPR, and sector-specific regulations create layers of regulatory complexity that smaller companies may lack the legal and technical capacity to navigate. While the Act includes some accommodations for SMEs (such as access to regulatory sandboxes and lower fine caps), critics argue that these measures are insufficient.
A practical problem has dogged the implementation process: the technical standards and guidelines needed to operationalize the Act's requirements have lagged behind the compliance deadlines. Harmonized standards developed by European standards bodies (CEN and CENELEC) are not expected to be finalized until April 2027, nearly a full year after the high-risk system obligations were originally scheduled to take effect in August 2026. The General-Purpose AI Code of Practice was published in July 2025, only weeks before the relevant obligations became applicable [26].
This timing gap has created significant legal uncertainty. Organizations must comply with requirements whose precise technical interpretation remains, in some cases, unclear.
Some legal scholars have questioned whether the Act's definitions are sufficiently precise. The boundary between high-risk and non-high-risk systems can be ambiguous in practice. The concept of "systemic risk" for GPAI models, while anchored to the 10^25 FLOPs threshold, has been criticized as both too rigid (because a compute threshold may not accurately capture a model's real-world risk profile) and too narrow (because it may miss risks arising from widely deployed models that fall below the threshold).
From the other side of the debate, civil society organizations and digital rights groups have argued that the Act does not go far enough in certain areas. The exceptions permitting real-time remote biometric identification for law enforcement purposes have been a particular point of contention. Critics including European Digital Rights (EDRi) and other advocacy groups argued that any use of real-time facial recognition in public spaces poses a fundamental threat to privacy and civil liberties, regardless of the safeguards attached [27].
Other civil society concerns include the limited scope of the emotion recognition ban (which applies only to workplaces and schools, leaving other contexts unregulated) and the reliance on provider self-assessment for most high-risk systems rather than mandatory third-party auditing.
A broader strategic debate concerns whether the EU's regulatory-first approach puts European companies at a competitive disadvantage relative to their American and Chinese counterparts. Proponents argue that clear rules create legal certainty and build public trust, which ultimately support adoption. Critics counter that compliance costs and regulatory uncertainty will slow European AI development at a time when speed matters enormously. The contrast between the EU's prescriptive framework and the US government's lighter-touch approach under the 2025 AI Action Plan has sharpened this debate [28].
The implementation of the EU AI Act has revealed several practical challenges that organizations face in achieving compliance.
A fundamental obstacle has been the timing mismatch between compliance deadlines and the availability of the technical standards needed to operationalize them. As noted above, the CEN and CENELEC harmonized standards are not expected before April 2027, and the General-Purpose AI Code of Practice arrived only weeks before the GPAI obligations became applicable [26].
This standards gap has forced organizations to interpret broad legal requirements without detailed technical specifications. Companies have reported difficulty determining precisely what constitutes adequate technical documentation, what level of testing satisfies the risk management requirements, and how to demonstrate compliance with accuracy and robustness standards in the absence of agreed-upon benchmarks.
The financial burden of compliance varies significantly depending on the size and type of organization. Early estimates suggest that achieving full compliance for a single high-risk AI system could cost between 200,000 and 400,000 EUR for a mid-sized company, including legal analysis, technical documentation, testing infrastructure, and ongoing monitoring. For GPAI model providers with systemic risk obligations, costs are substantially higher, with frontier model evaluation and adversarial testing alone potentially running into millions of euros annually.
| Compliance area | Estimated cost range (EUR) | Primary cost drivers |
|---|---|---|
| Legal analysis and gap assessment | 50,000 - 150,000 | Classification of AI systems, identification of obligations |
| Technical documentation | 30,000 - 100,000 per system | Detailed system descriptions, training data documentation |
| Risk management system | 50,000 - 200,000 per system | Risk identification, testing, mitigation measures |
| Conformity assessment | 20,000 - 80,000 per system | Self-assessment or third-party audit |
| Ongoing monitoring and updates | 30,000 - 100,000 annually | Post-market surveillance, incident reporting |
| GPAI systemic risk evaluations | 500,000 - 5,000,000+ annually | Adversarial testing, external evaluations, cybersecurity |
As described above, more than 110 European companies, including Airbus, ASML, and Mistral AI, formally called on the Commission to delay enforcement of the high-risk system obligations, and startup founders and investors warned in open letters that the regulation risks driving talent and investment away from Europe [25].
The lobbying effort proved partially successful: the Omnibus VII simplification package, proposed in late 2025 and advancing through the legislative process in early 2026, includes substantial delays to the high-risk system requirements.
The Act includes several provisions intended to ease the burden on smaller organizations:
| Accommodation | Description |
|---|---|
| Regulatory sandboxes | Member States must establish at least one AI regulatory sandbox by 2 August 2026, providing controlled environments where companies can test AI systems under regulatory supervision |
| Reduced fines | Fine caps for SMEs and startups are set at the lower of the percentage or absolute amount in each tier |
| Simplified documentation | Some documentation requirements can be satisfied through simplified templates for smaller-scale systems |
| Extended timelines | The Omnibus VII proposal would extend sandbox establishment deadlines and broaden SME exemptions to include small mid-cap companies |
As of early 2026, the EU's enforcement machinery is beginning to come online. The AI Office has engaged with major technology platforms regarding potential AI Act obligations, signaling that the regulation is moving from the compliance preparation phase into active oversight.
Reports indicate that the AI Office has initiated preliminary investigations into several large technology companies' GPAI model practices, focusing on whether adequate technical documentation and training data summaries have been provided in accordance with the obligations that became applicable in August 2025. While no formal fines have been levied as of March 2026, the AI Office has issued multiple requests for information and documentation to GPAI model providers [22].
The first formal enforcement actions under the prohibited practices provisions (which took effect in February 2025) are expected to emerge in the second half of 2026. National market surveillance authorities in several Member States have begun conducting audits of AI systems deployed in high-risk domains, though the pace varies significantly across jurisdictions.
The AI Office has also clarified that it will pursue enforcement based on a risk-proportionate approach, prioritizing cases involving prohibited practices and GPAI models with systemic risk before turning to the broader universe of high-risk AI systems once those obligations take effect.
In late 2025, the European Commission proposed amending the AI Act as part of a broader "Omnibus VII" simplification package aimed at reducing the regulatory burden on European businesses while maintaining the substance of digital legislation. The proposal responds to persistent industry complaints about compliance complexity, the timing mismatch between obligations and available standards, and concerns about European competitiveness in AI.
On 13 March 2026, the Council of the European Union adopted its negotiating position on the Omnibus VII amendments. The Council's position includes several significant modifications to the AI Act's original timeline and scope [17].
| Change | Current requirement | Proposed amendment |
|---|---|---|
| Standalone high-risk AI systems (Annex III) | Compliance required by August 2, 2026 | Delayed to December 2, 2027 |
| High-risk AI in regulated products (Annex I) | Compliance required by August 2, 2027 | Delayed to August 2, 2028 |
| Regulatory sandbox establishment | Required by 2 August 2026 | Extended to December 2027 |
| SME exemption scope | Limited to SMEs and startups | Extended to include small mid-cap companies |
| AI Office enforcement powers | As specified in original Act | Strengthened to reduce governance fragmentation |
| Non-consensual intimate imagery | Not specifically addressed | New ban on AI-generated non-consensual intimate imagery and CSAM added |
The European Parliament is expected to adopt its own negotiating position on the Omnibus VII proposals, after which trilogue negotiations between the Parliament, Council, and Commission will determine the final text. The AI Act's existing provisions remain fully operative while these amendments are being negotiated. Industry observers anticipate that the final amendments could be adopted by late 2026 or early 2027 [29].
The Omnibus VII proposal has generated mixed reactions. Industry groups have generally welcomed the timeline extensions, viewing them as a pragmatic acknowledgment that compliance infrastructure is not yet ready. Civil society organizations, including European Digital Rights (EDRi), have criticized the delays as weakening the regulation under industry pressure. Several Member State governments have expressed concern that repeated delays risk undermining the Act's credibility as a regulatory instrument.
The addition of a prohibition on AI-generated non-consensual intimate imagery and child sexual abuse material has been broadly supported across stakeholder groups, reflecting the urgency of the issue highlighted by the Grok deepfake controversy in early 2026.
The AI Act requires each Member State to designate national competent authorities responsible for overseeing implementation within their territory. As of early 2026, the national infrastructure remains a work in progress.
| Implementation metric | Status (March 2026) |
|---|---|
| Member States with publicly designated single contact points | 8 of 27 |
| Member States with established market surveillance authorities for AI | Approximately 12 of 27 |
| Member States with operational regulatory sandboxes | 4 (Spain, Netherlands, Denmark, Malta) |
| Member States with published national AI strategies updated for the AI Act | 15 of 27 |
The uneven pace of national implementation has raised concerns about regulatory fragmentation within the EU. Companies operating across multiple Member States may face inconsistent interpretations and enforcement priorities, at least until the AI Board and AI Office's coordinating functions mature [19] [30].
As of March 2026, the EU AI Act is roughly midway through its implementation timeline. The prohibited practices and AI literacy obligations have been in effect since February 2025. The GPAI model rules and governance provisions became applicable in August 2025, along with the penalty regime. The most significant upcoming milestone is August 2026, when the bulk of the high-risk AI system requirements and transparency obligations are scheduled to take effect.
However, the landscape is shifting. On 13 March 2026, the Council of the European Union adopted its negotiating position on the Commission's proposal to amend the AI Act as part of the Omnibus VII simplification package. This proposal would extend the application dates for high-risk system rules by approximately 16 months, would broaden certain SME exemptions to include small mid-cap companies, and would strengthen the AI Office's enforcement powers while reducing governance fragmentation. Trilogue negotiations with the European Parliament are expected to begin in the coming months [29].
The AI Office continues to develop guidelines on topics including high-risk classification, incident reporting, transparency requirements, and the practical application of the prohibited practices provisions. The national implementation infrastructure remains a work in progress; as of early 2026, fewer than a third of Member States had formally established their designated single contact points [30].
Meanwhile, the first enforcement activity has begun to materialize: as noted above, the AI Office has issued information requests to GPAI model providers and opened preliminary inquiries into major platforms' practices, moving the regulation from the compliance preparation phase into active oversight.
The Codes of Practice process continues to evolve. The General-Purpose AI Code of Practice, finalized in July 2025, is now being applied by signatories including OpenAI, Anthropic, Google, and xAI. A separate Code of Practice on marking and labelling of AI-generated content remains under development [16].
The EU AI Act represents one of the most ambitious attempts by any government to regulate artificial intelligence comprehensively. Whether it succeeds in balancing innovation with protection, and whether it achieves the global influence its advocates hope for, will depend on how effectively its provisions are implemented, enforced, and adapted in the years ahead.
The EU AI Act exists within a broader global regulatory landscape. The following table provides a high-level comparison of how the EU approach differs from other major jurisdictions.
| Dimension | EU AI Act | US approach | China approach | UK approach |
|---|---|---|---|---|
| Legislative model | Comprehensive horizontal regulation | No federal AI law; executive orders and state laws | Sector-specific regulations | Pro-innovation; sector regulator guidance |
| Risk classification | Four-tier risk framework (unacceptable, high, limited, minimal) | No formal federal classification | Application-specific rules | Five cross-sector principles |
| GPAI/foundation model provisions | Specific obligations with systemic risk threshold (10^25 FLOPs) | No federal requirements; state laws (CA SB 53, NY RAISE Act) target frontier models | Algorithm registration and security assessments | AI Security Institute conducts voluntary evaluations |
| Enforcement body | European AI Office (GPAI); national authorities (other AI systems) | No single body; varies by agency and state | Cyberspace Administration of China and sector regulators | Existing sector regulators |
| Maximum penalty | 35 million EUR or 7% global turnover | Varies by state (California: up to $100,000 per violation for some provisions) | RMB 10 million under amended Cybersecurity Law | No specific AI penalty regime |
| Extraterritorial reach | Applies to non-EU providers if AI output is used within the EU | Limited; state laws vary | Applies to services available in China | Limited |
The EU's approach is uniquely comprehensive in scope, but its effectiveness depends on implementation quality and the ability of European institutions to enforce obligations against global technology companies whose primary operations are outside EU borders.
For multinational technology companies, the AI Act's extraterritorial reach means that compliance is required regardless of where the company is headquartered. Companies like OpenAI, Google, Meta, and Anthropic must comply with GPAI model obligations for any model whose outputs are used within the European Union.
This has practical implications for product design, documentation practices, and organizational governance. Several companies have adopted a strategy of building compliance measures into their global products rather than maintaining separate European versions, following the pattern established by the GDPR. Adobe and OpenAI have integrated C2PA content provenance watermarking into their products globally, not just for European users [22].
However, some companies are evaluating whether certain AI features or products should be restricted or delayed in the European market if compliance costs are disproportionate. The tension between global product development and regional regulatory compliance remains an active strategic challenge for the industry.