The Executive Order on AI most commonly refers to Executive Order 14110, titled "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," signed by US President Joe Biden on October 30, 2023. At the time, it was the most comprehensive AI governance action issued by the United States government, directing over 50 federal agencies to undertake more than 100 specific actions related to AI safety, security, equity, innovation, and international cooperation. The order was revoked by President Donald Trump on January 20, 2025, and replaced three days later by Executive Order 14179, "Removing Barriers to American Leadership in Artificial Intelligence," which shifted the federal government's AI posture from risk mitigation and oversight toward deregulation and innovation promotion [1][2].
By mid-2023, the rapid proliferation of large language models such as GPT-4 and Claude had created intense pressure on governments worldwide to respond to both the opportunities and risks posed by advanced AI. In the United States, there was no comprehensive federal AI legislation. The Biden administration had taken incremental steps, including the Blueprint for an AI Bill of Rights (October 2022) and voluntary commitments to manage AI risks from seven leading AI companies in July 2023, a group that grew to fifteen by that September. However, these measures were widely seen as insufficient given the pace of technological change [3].
Internationally, the European Union was advancing the EU AI Act, which would become the world's first comprehensive AI law upon its formal adoption in 2024. China had already implemented binding regulations on generative AI services, effective August 2023. Against this backdrop, the Biden administration sought to demonstrate American leadership on AI governance without waiting for Congress to pass legislation [4].
The executive order leveraged presidential authority, including powers under the Defense Production Act (DPA), to impose requirements on AI developers and direct federal agencies to take action. The use of the DPA was significant because it gave the order legal teeth beyond what a typical executive directive could achieve: under DPA Title VII, the government could compel private companies to provide information about their AI development activities [5].
The order was organized around eight guiding principles and covered an extraordinarily broad range of AI-related policy areas. Its most consequential provisions fell into several categories.
The order defined "dual-use foundation models" as AI models that exhibit, or could be easily modified to exhibit, high levels of performance at tasks posing serious risks to security, public health, or safety. It required developers of such models to report to the federal government on an ongoing basis, sharing information about the development process, cybersecurity measures, ownership of model weights, and results of safety testing [1].
The reporting threshold was set at models trained using more than 10^26 integer or floating-point operations. A Biden administration official stated at the time that the threshold was calibrated so that "current models wouldn't be captured but the next generation state-of-the-art models likely would." For context, the threshold sits roughly 318 times above the estimated training compute of GPT-3's 175-billion-parameter version (about 3.14 × 10^23 floating-point operations, per the figure reported in the GPT-3 paper). The threshold was intended to be updated by the Secretary of Commerce as the state of the art advanced [5].
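That ratio is easy to reproduce with the standard back-of-the-envelope formula for dense transformer training compute, roughly 6 floating-point operations per parameter per training token. The sketch below runs the arithmetic; the GPT-3 token count is taken from the original GPT-3 paper, and the EU AI Act's 10^25 threshold is included for comparison.

```python
# Back-of-the-envelope check of the EO 14110 reporting threshold, using the
# common approximation: training compute ~= 6 * parameters * tokens
# (6 FLOPs per parameter per token for a dense transformer, forward + backward).

EO_THRESHOLD = 1e26        # EO 14110 model-reporting threshold (operations)
EU_THRESHOLD = 1e25        # EU AI Act systemic-risk threshold, for comparison

gpt3_params = 175e9        # GPT-3 parameter count
gpt3_tokens = 300e9        # ~300B training tokens, per the GPT-3 paper

gpt3_compute = 6 * gpt3_params * gpt3_tokens
print(f"GPT-3 training compute: ~{gpt3_compute:.2e} FLOPs")            # ~3.15e+23
print(f"EO threshold / GPT-3:   ~{EO_THRESHOLD / gpt3_compute:.0f}x")  # ~317x
print(f"EU threshold / GPT-3:   ~{EU_THRESHOLD / gpt3_compute:.0f}x")  # ~32x
```

The 6ND approximation yields about 3.15 × 10^23 operations, matching the GPT-3 paper's reported figure (and the ~318× ratio) to within rounding.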
The order also required companies possessing large-scale computing clusters to report the location and total computing power of those clusters to the federal government. The order's initial technical conditions defined such a cluster as a set of machines physically co-located in a single datacenter, transitively connected by networking of over 100 Gbit/s, and having a theoretical maximum computing capacity of 10^20 integer or floating-point operations per second for AI training [5].
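To give a sense of scale, the sketch below converts the 10^20 operations-per-second figure into counts of contemporary accelerators. The per-GPU throughput numbers are assumptions based on vendor peak specifications (dense BF16); details such as dense-versus-sparse accounting were left to the Department of Commerce.

```python
# Rough scale of the 10^20 op/s cluster-reporting threshold in contemporary
# GPUs. Per-GPU figures are vendor peak specs (dense BF16), assumed here for
# illustration only; actual accounting rules were left to Commerce.

CLUSTER_THRESHOLD = 1e20           # EO 14110 cluster threshold (operations/s)

gpu_peak_ops = {
    "NVIDIA H100 (SXM)": 989e12,   # ~989 TFLOP/s dense BF16
    "NVIDIA A100": 312e12,         # ~312 TFLOP/s dense BF16
}

for gpu, ops in gpu_peak_ops.items():
    print(f"{gpu}: ~{CLUSTER_THRESHOLD / ops:,.0f} GPUs to reach the threshold")
# H100: ~101,112; A100: ~320,513
```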
Developers of the most powerful AI systems were required to share the results of safety testing and red-teaming exercises with the federal government. Red-teaming, the practice of deliberately testing AI systems for vulnerabilities and harmful capabilities, was to be conducted according to standards developed by NIST (the National Institute of Standards and Technology). This provision aimed to ensure that the government had visibility into the safety profiles of frontier models before they were widely deployed [1].
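NIST's red-teaming guidance concerns methodology rather than code, but structurally a red-team evaluation reduces to running a model against a bank of adversarial prompts and recording graded outcomes. The sketch below shows that skeleton under stated assumptions: `query_model` and `grade_response` are hypothetical stand-ins for a model endpoint and a human or automated grader, and the risk categories are illustrative.

```python
# Minimal skeleton of a red-team evaluation run: adversarial prompts in,
# graded outcomes out. `query_model` and `grade_response` are hypothetical
# stand-ins for a model API and a grading step; categories are illustrative.
from dataclasses import dataclass

@dataclass
class RedTeamResult:
    category: str      # e.g. "cyber", "bio", "jailbreak"
    prompt: str
    response: str
    unsafe: bool       # did the model produce a disallowed output?

def run_red_team(prompts: dict[str, list[str]], query_model, grade_response):
    """Run every adversarial prompt and record the graded outcome."""
    results = []
    for category, prompt_list in prompts.items():
        for prompt in prompt_list:
            response = query_model(prompt)
            results.append(RedTeamResult(category, prompt, response,
                                         unsafe=grade_response(category, response)))
    return results

def summarize(results: list[RedTeamResult]) -> dict[str, float]:
    """Per-category rate of unsafe completions, the headline figure reported."""
    rates = {}
    for category in {r.category for r in results}:
        subset = [r for r in results if r.category == category]
        rates[category] = sum(r.unsafe for r in subset) / len(subset)
    return rates
```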
The order directed the creation of the US AI Safety Institute within NIST, tasked with developing standards, guidelines, and best practices for AI safety and security. The institute was charged with several responsibilities:
| Responsibility | Description |
|---|---|
| Safety standards | Develop guidelines for safe, secure, and trustworthy AI systems |
| Evaluation frameworks | Create benchmarks and testing methodologies for assessing AI model capabilities and risks |
| Red-team guidance | Establish standards for red-teaming and adversarial testing of AI systems |
| Content authentication | Develop guidance on watermarking and content provenance for AI-generated content |
| International cooperation | Collaborate with international counterparts, including the UK AI Safety Institute |
The AI Safety Institute quickly became one of the most prominent outcomes of the executive order, building evaluation teams and conducting joint assessments with its UK counterpart [6].
The order directed the Department of Commerce to develop guidance for content authentication and watermarking to clearly label AI-generated content. Watermarking was defined in the order as "the act of embedding information, which is typically difficult to remove, into outputs created by AI, including into outputs such as photos, videos, audio clips, or text, for the purposes of verifying the authenticity of the output or the identity or characteristics of its provenance, modifications, or conveyance" [1].
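The order left the choice of technique to Commerce and NIST, but one family of methods from the research literature illustrates what statistical text watermarking involves: a "green-list" scheme (as in Kirchenbauer et al., 2023) biases generation toward a pseudorandom subset of the vocabulary keyed on context, and a detector recomputes that subset to test whether a text contains improbably many "green" tokens. The sketch below shows the detection side in simplified form; it operates on word strings rather than tokenizer IDs, and all parameters are illustrative rather than anything the order specified.

```python
# Simplified detector for a "green-list" statistical text watermark
# (after Kirchenbauer et al., 2023). Illustrative only: real implementations
# work on tokenizer IDs and bias model logits at generation time.
import hashlib
import math

GREEN_FRACTION = 0.5  # fraction of the vocabulary marked "green" at each step

def is_green(prev_token: str, token: str) -> bool:
    """Pseudorandomly assign `token` to the green list, keyed on `prev_token`."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] / 256.0 < GREEN_FRACTION

def watermark_z_score(tokens: list[str]) -> float:
    """How far the observed green-token count exceeds chance, in std devs."""
    n = len(tokens) - 1
    if n <= 0:
        return 0.0
    greens = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    expected = GREEN_FRACTION * n
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (greens - expected) / std

# A watermarking generator nudges sampling toward green tokens, so watermarked
# text yields a high z-score (e.g. > 4) while ordinary text hovers near zero.
```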
Recognizing that the United States depended heavily on foreign-born researchers and engineers for its AI workforce, the order directed the State Department and Department of Homeland Security to streamline visa processes for AI experts. Specific measures included modernizing visa criteria for highly skilled AI professionals, clarifying pathways for researchers and entrepreneurs, and reducing processing times. The order also directed agencies to consider how immigration policy could help the US attract and retain global AI talent [1].
The order required each federal agency to designate a Chief Artificial Intelligence Officer (CAIO) responsible for overseeing the agency's use of AI. Agencies were directed to take a risk-based approach to deploying generative AI rather than imposing blanket bans. They were also required to complete inventories of their AI use cases and ensure that AI systems used in government did not discriminate against protected groups [1].
The order also addressed a wide range of other topics:
| Area | Key Requirements |
|---|---|
| Civil rights | Guidance to prevent algorithmic discrimination in hiring, lending, criminal justice, and other domains |
| Workers | Studies on AI's impact on the labor market; guidance on employer use of AI for surveillance and evaluation |
| Healthcare | Development of AI safety standards for healthcare applications |
| Education | Recommendations for using AI in educational settings |
| Privacy | Research into privacy-preserving techniques; assessment of how AI changes the privacy landscape |
| Competition | Actions to promote competition in AI markets and prevent monopolistic concentration |
| National security | Classified companion memo on AI and national security |
Between its signing in October 2023 and its revocation in January 2025, Executive Order 14110 produced substantial implementation activity. NIST published its AI Risk Management Framework Generative AI Profile in July 2024. The Department of Commerce established initial reporting requirements for computing clusters. Multiple agencies appointed Chief AI Officers. The AI Safety Institute began operations, hiring staff and conducting evaluations of frontier models, including a joint evaluation of OpenAI's o1 model with the UK AI Safety Institute [6][7].
However, many of the order's mandates had aggressive timelines (90 days, 180 days, 270 days) that proved challenging for federal agencies to meet. Some deadlines were missed, and the breadth of the order meant that implementation was uneven across agencies [7].
On January 20, 2025, within hours of taking office for his second term, President Donald Trump revoked Executive Order 14110 as part of a broader package of executive actions reversing Biden-era policies. Three days later, on January 23, 2025, Trump signed Executive Order 14179, titled "Removing Barriers to American Leadership in Artificial Intelligence" [2].
The Trump order stated that its purpose was to "sustain and enhance America's global AI dominance," to promote AI development "free from ideological bias or engineered social agendas," and to establish an action plan to that end. The order reflected the incoming administration's view that Biden's AI governance approach was overly regulatory and risked hampering American competitiveness, particularly relative to China [2].
Executive Order 14179 directed three senior officials (the Assistant to the President for Science and Technology, the Special Advisor for AI and Crypto, and the Assistant to the President for National Security Affairs) to review all actions taken under the Biden AI executive order. Within 180 days, they were to assess which actions were "inconsistent with, or present obstacles to" the new administration's pro-innovation AI policy and recommend their revision or rescission [2].
The order also called for the development of an "AI Action Plan" that would articulate the administration's priorities for maintaining US AI dominance. The plan was to be developed by the Office of Science and Technology Policy (OSTP) in coordination with relevant agencies [2].
The revocation eliminated several key elements of the Biden order:
| Revoked Element | Consequence |
|---|---|
| Dual-use foundation model reporting | Companies no longer required to share safety testing results or development information with the government |
| Computing cluster reporting | Reporting requirements for large-scale computing infrastructure eliminated |
| Red-teaming mandates | Government-required red-teaming standards no longer in effect |
| Chief AI Officer requirement | Federal agencies no longer mandated to appoint CAIOs (though some retained them voluntarily) |
| Watermarking guidance | Department of Commerce efforts on content authentication were deprioritized |
| Immigration streamlining | AI talent visa initiatives were rolled back as part of the broader immigration policy shift |
Not everything from the Biden era was eliminated. The NIST AI Risk Management Framework, published in January 2023 (before Executive Order 14110), remained in place as a voluntary, non-binding guidance document. Many private companies continued to use it as a best-practice reference. Some agencies retained their Chief AI Officers and AI governance structures even without the federal mandate. And corporate safety commitments made at the AI Safety Summit series (the Seoul Frontier AI Safety Commitments, for instance) existed independently of any US executive order [8].
However, the US AI Safety Institute at NIST underwent significant changes. In June 2025, Secretary of Commerce Howard Lutnick announced that the institute would be renamed the Center for AI Standards and Innovation (CAISI), dropping "safety" from its title and reorienting its mission toward national security risks and global competitiveness. Staff layoffs affected many of the researchers hired under the Biden-era mandate. Lutnick stated: "For far too long, censorship and regulations have been used under the guise of national security. Innovators will no longer be limited by these standards" [9].
On December 11, 2025, Trump signed an additional executive order titled "Ensuring a National Policy Framework for Artificial Intelligence." This order signaled an intent to consolidate AI policy at the federal level and preempt the growing patchwork of state-level AI regulations. The order reflected the administration's concern that diverse state laws (including new AI laws in California, Texas, Colorado, and New York) were creating compliance burdens for the AI industry [10].
The contrast between the Biden executive order's approach and the EU AI Act highlights fundamental differences in how the two jurisdictions approached AI governance.
| Dimension | Biden Executive Order 14110 | EU AI Act |
|---|---|---|
| Legal authority | Executive action (revocable by next president) | Binding legislation passed by European Parliament and Council |
| Durability | Revoked after 15 months | Permanent law with phased implementation through 2026 |
| Enforcement mechanism | DPA reporting requirements; agency directives | Fines up to 35 million EUR or 7% of global annual revenue, whichever is higher |
| Risk classification | Focused on dual-use foundation models above 10^26 FLOPs | Four-tier risk system: unacceptable, high, limited, minimal |
| Scope | Broad (safety, equity, labor, privacy, immigration, national security) | Focused on AI system lifecycle (development, deployment, use) |
| Transparency requirements | Developer reporting to government | Mandatory transparency for users interacting with AI; public documentation |
| Approach to innovation | Balanced safety with innovation promotion | Risk-based regulation with exemptions for research and open-source |
| International influence | Influential but short-lived | Widely seen as a regulatory model for other jurisdictions |
The EU AI Act's risk-based classification system categorizes AI applications into four tiers, with different regulatory requirements for each. Systems posing "unacceptable risk" (such as social scoring and certain biometric surveillance applications) are prohibited outright. High-risk systems (including AI used in critical infrastructure, education, employment, and law enforcement) face stringent requirements including conformity assessments, documentation, and human oversight. General-purpose AI models, including large language models, face additional transparency requirements, and those trained with more than 10^25 FLOPs are presumed to pose systemic risk and carry further safety obligations [4].
The Biden executive order, by contrast, was primarily focused on the most capable frontier models and did not establish a comprehensive classification system for all AI applications. Its strength lay in its breadth (covering immigration, labor, education, and national security alongside pure safety concerns) and in the reporting requirements enabled by the Defense Production Act. Its weakness was its impermanence: as an executive order, it could be (and was) revoked by the next president [4].
Despite its relatively brief effective period, Executive Order 14110 had lasting effects on the AI industry.
The order's emphasis on red-teaming, safety testing, and reporting helped normalize these practices across the industry. By the time the order was revoked, most major AI labs had established formal safety testing programs, published safety frameworks, and built internal governance structures. While these were no longer federally mandated after January 2025, market pressure, customer expectations, and international requirements (particularly from the EU AI Act) kept many companies on a similar trajectory [8].
The executive order served as a reference point for other countries developing their own AI governance approaches. Its definitions of "dual-use foundation model" and its compute-based threshold for regulatory attention influenced policy discussions worldwide. Even after its revocation, international policy documents continued to reference its frameworks [7].
The cycle of enactment and revocation created significant uncertainty for the AI industry. Companies that had invested in compliance infrastructure for the Biden order found those investments stranded when it was revoked. At the same time, the Trump administration's deregulatory stance was itself unstable, as companies recognized that a future administration could reimpose requirements. This uncertainty complicated long-term planning for AI safety investments [10].
The revocation of the federal order accelerated state-level AI regulation. In the absence of comprehensive federal legislation, states including California, Texas, Colorado, and New York passed their own AI laws in 2025. California's Transparency in Frontier Artificial Intelligence Act and Texas's Responsible Artificial Intelligence Governance Act both took effect on January 1, 2026. This proliferation of state laws created the very regulatory fragmentation that both industry and the Trump administration sought to avoid, ultimately prompting the December 2025 executive order on federal AI policy preemption [10].
As of early 2026, the United States lacks comprehensive federal AI legislation. The policy landscape is characterized by the Trump administration's pro-innovation, deregulatory stance at the federal level, coupled with an expanding patchwork of state AI laws. The NIST AI Safety Institute has been renamed the Center for AI Standards and Innovation (CAISI) and reoriented toward national security and competitiveness rather than broad AI safety. The NIST AI Risk Management Framework continues to serve as a voluntary industry reference [9].
Congress has introduced various AI-related bills but has not passed comprehensive legislation. The AI industry operates under a mix of voluntary commitments (such as the Seoul Frontier AI Safety Commitments), state laws, and international obligations (particularly for companies operating in the EU). The gap between the regulatory approaches of the US and EU has widened, with the EU AI Act's requirements taking effect in phases through 2026 while the US federal government has moved in the opposite direction [10].
The legacy of Executive Order 14110 persists in the safety practices, governance structures, and institutional frameworks it helped establish, even though the order itself is no longer in effect. Whether its approach to AI governance will be revived by a future administration or superseded by congressional legislation remains an open question.