America's AI Action Plan, officially titled Winning the Race: America's AI Action Plan, is the Trump administration's national AI strategy released by the White House Office of Science and Technology Policy (OSTP) on July 23, 2025. The plan was the deliverable required by Executive Order 14179, "Removing Barriers to American Leadership in Artificial Intelligence," signed by President Donald Trump on January 23, 2025, which directed the Assistant to the President for Science and Technology, the Special Advisor for AI and Crypto, and the Assistant to the President for National Security Affairs to produce an AI strategy within 180 days. The 28-page document lays out more than 90 federal policy actions across three pillars: accelerating AI innovation, building American AI infrastructure, and leading in international AI diplomacy and security [1][2][3].
The plan was authored by OSTP Director Michael Kratsios, Special Advisor for AI and Crypto David Sacks, and Senior White House Policy Advisor for AI Sriram Krishnan, with input from National Security Advisor Marco Rubio and over 10,000 public comments submitted in response to a February 2025 Request for Information. Released alongside three accompanying executive orders (EO 14318, EO 14319, and EO 14320), the plan represents a sharp reversal from the Biden administration's prior approach under the now-revoked Executive Order 14110, replacing oversight and risk mitigation with deregulation, federal procurement steering, infrastructure acceleration, and aggressive AI export promotion. Vice President JD Vance and President Trump jointly unveiled the plan at a "Winning the AI Race" summit in Washington, D.C., where Trump declared the United States would treat AI as "a national mission" and an "existential race with China" [3][4][5].
Reception has split along familiar lines. Industry groups, particularly large AI labs like OpenAI, Anthropic, Google DeepMind, and Microsoft, broadly welcomed the focus on infrastructure permitting and federal compute access while raising selective concerns about export controls and the anti-DEI procurement language. The AFL-CIO and civil rights groups including the ACLU criticized the plan as "a gift to Big Tech" that strips workers and consumers of basic protections. International responses ranged from the European Union's quieter pushback through its parallel "AI Continent" plan to China's "Global AI Governance Action Plan," published just three days later on July 26, 2025, which positioned itself as a multilateral counterweight to the American export push [6][7][8].
The AI Action Plan exists because EO 14179 ordered it into existence. Trump signed the order on January 23, 2025, three days after revoking President Biden's Executive Order 14110. EO 14179 declared a new federal posture toward artificial intelligence: the United States would "sustain and enhance America's global AI dominance" for "human flourishing, economic competitiveness, and national security," and would "promote AI development free from ideological bias or social agendas" [1][9].
The order's most consequential operational provision was Section 4, which set the 180-day deadline. Three coordinators were named to draft the plan: the Assistant to the President for Science and Technology (filled by Michael Kratsios after Senate confirmation on March 25, 2025), the Special Advisor for AI and Crypto (David Sacks, named December 5, 2024), and the Assistant to the President for National Security Affairs (Mike Waltz initially, then Marco Rubio in an acting capacity beginning May 2025). The OMB Director, the Domestic Policy Council, and the Economic Policy Council were directed to coordinate. The deadline fell on July 22, 2025; the plan was published one day late, on July 23, alongside the announcement of three follow-on executive orders [2][9].
The Trump AI policy team is unusually concentrated. Three figures took the lead in drafting the plan and are publicly credited as its authors: OSTP Director Michael Kratsios, Special Advisor for AI and Crypto David Sacks, and Senior White House Policy Advisor for AI Sriram Krishnan.
Vice President JD Vance also became a public face of the administration's AI agenda, most notably at the Paris AI Action Summit in February 2025, where he previewed many of the plan's themes.
The AI Action Plan sits inside a broader rejection of the Biden-era approach. EO 14110, signed October 30, 2023, had used the Defense Production Act to compel reporting from developers of dual-use foundation models trained above 10^26 floating-point operations. It directed NIST to develop the AI Risk Management Framework, created the US AI Safety Institute, and required equity reviews and red-teaming for federally used AI systems. Trump's January 20, 2025 revocation called the Biden order "unpopular, inflationary, illegal, and radical," and the AI Action Plan was the affirmative substitute. Where Biden's order leaned on compelled disclosure, the plan leans on procurement leverage, NEPA streamlining, federal land use, and export promotion. Where Biden mandated equity audits, the plan directs OMB and NIST to strip references to "misinformation, Diversity, Equity, and Inclusion, and climate change" [4][13].
On February 6, 2025, OSTP published a Request for Information (RFI) in the Federal Register seeking public input on the development of the plan. The RFI was unusually short and open-ended, asking respondents to identify the highest-priority federal actions needed "to sustain and enhance America's AI dominance." The original comment deadline was March 15, 2025. OSTP, the National Science Foundation, and the Networking and Information Technology Research and Development (NITRD) Program coordinated the process [14][15].
When the comment window closed, NITRD reported 10,068 responses determined to be responsive to the RFI. The White House announced the figure on April 24, 2025, with OSTP staff publicly thanking submitters and signaling that the plan was being drafted to reflect the public response. In practice, the influence of the comments on the final plan was uneven: detailed technical recommendations from major AI labs and trade associations are visible in the final document, while many of the labor, civil rights, and academic submissions are not [15][16].
| Submitter | Position summary |
|---|---|
| OpenAI | Called federal AI use "unacceptably low"; proposed waiving compliance requirements for AI pilots; urged federal preemption of state AI laws via a voluntary framework; proposed government partnerships for national security models |
| Anthropic | Proposed adding 50 GW of AI-dedicated power capacity by 2027; called for streamlined transmission permitting; supported strong NIST and CAISI evaluation roles; advocated government-wide AI workflow audits |
| | Supported infrastructure reform and federal AI R&D investment; flagged concern that broad export controls would harm allied markets |
| Microsoft | Pushed for expansion of countries qualifying for Tier 1 status under the (then pending) AI diffusion rule |
| Center for Data Innovation | Supported open-weight models and federal procurement modernization |
| Center for Democracy & Technology | Warned against weakening civil rights protections; opposed broad preemption |
| Business Roundtable | Backed workforce training, federal AI adoption, and procurement reform |
| Center for Security and Emerging Technology (CSET) | Argued for strong export controls, talent retention, and evaluation infrastructure |
| Center for AI Safety | Cautioned against open-weighting models with frontier dangerous capabilities |
| AFL-CIO | Demanded labor representation in AI deployment decisions; opposed deregulation |
| Public Citizen | Opposed federal preemption; called for civil rights protections |
The full set of responses remains publicly accessible through NITRD. Coverage from outlets like Just Security and Platformer noted that submissions converged on three themes: lagging federal AI adoption, the need for energy and data center capacity, and federal preemption of state AI laws. They diverged sharply on safety, civil rights, and worker protections [15][16][17].
The plan is organized as three pillars, each containing several thematic categories with specific recommended policy actions. White House announcements and outside legal analyses cite "more than 90" actions; some implementation trackers count 103 distinct directives, depending on how broadly subitems are parsed [3][5][18]. The pillars are summarized below.
| Pillar | Theme | Lead agencies | Selected priorities |
|---|---|---|---|
| I. Accelerate AI Innovation | Deregulation, open source, federal AI use | OSTP, OMB, NIST, NSF, FTC | Remove regulatory barriers; revise NIST AI RMF; promote open-weight models; enable federal AI adoption; "unbiased AI" procurement |
| II. Build American AI Infrastructure | Data centers, energy, semiconductors, cybersecurity | DOE, DOI, DOC, DOD, DHS | NEPA categorical exclusions; FAST-41 expansion; federal lands for compute; CHIPS Act streamlining; secure-by-design AI standards |
| III. Lead in International AI Diplomacy and Security | Exports, alliances, controls | DOC, State, Treasury, Defense | American AI Exports Program; "AI Alliance" of partner countries; export controls on China; counter PRC influence in international standards |
Three cross-cutting priorities are layered over the pillars: protecting and promoting American workers, ensuring AI systems are "trustworthy and free from ideological bias," and safeguarding AI from misuse, theft, and adversarial exploitation [3][5].
The first pillar is the most explicitly deregulatory. It is also the longest and contains the bulk of the procurement, R&D, and workforce provisions.
Every federal agency is directed to identify and propose for revision or repeal "regulations, rules, memoranda, administrative orders, guidance documents, policy statements, and interagency agreements that unnecessarily hinder AI development or deployment." OMB is to publish guidelines for this review. The Federal Trade Commission is directed to review and consider terminating consent decrees and ongoing investigations "that unduly burden AI innovation," a clear contrast with the active AI enforcement posture of the Lina Khan FTC [3][5].
The plan also instructs OMB to identify federal funding programs whose conditions are inconsistent with the new AI policy, with a view to limiting funds flowing to states with what the administration considers "burdensome" AI regulations. This carrot-and-stick approach foreshadows the December 11, 2025 preemption executive order [3][13].
NIST is directed to revise the AI Risk Management Framework to remove references to "misinformation, Diversity, Equity, and Inclusion, and climate change." The framework, originally released in January 2023, had become a widely cited industry reference. The plan stops short of requiring private adoption but uses federal procurement and grant conditions to push commercial AI products toward the revised framework. Through 2025 and into 2026, NIST issued updated profiles, including the AI Cyber Profile and a critical-infrastructure trustworthiness profile released in concept-note form on April 7, 2026 [18][19].
The plan endorses open-source and open-weight AI as "crucial to American AI dominance." It directs the National Science Foundation to expand access to compute resources for academic researchers, startups, and government users, and calls for an active federal posture in supporting US-developed open-source AI standards globally. This was a particular victory for AI labs like Meta and Mistral, and was also broadly supported by Anthropic and OpenAI in their RFI submissions, though OpenAI's commercial strategy is less open-weight focused [3][17].
The plan also provides for federal funding to support "frontier evaluation" of open-weight models, including evaluations of models with safety guardrails removed for adversarial testing, a workstream that was later operationalized through the Center for AI Standards and Innovation (CAISI) testing agreements signed with Google DeepMind, Microsoft, and xAI in May 2026 [20].
A significant cluster of recommendations targets federal use of AI. The plan formalizes the Chief Artificial Intelligence Officer Council (CAIOC) as the primary venue for interagency coordination on federal AI adoption. This is notable because it preserves a structure that was originally created under Biden's EO 14110 even as the rest of Biden's framework was dismantled. The plan also creates a federal talent-exchange program to detail AI specialists across agencies, directs the General Services Administration to streamline AI procurement, and instructs the Department of Defense to integrate AI into operational workflows including planning, logistics, and intelligence [3][5].
At the Department of Defense, specific directives include refining the Responsible AI Strategy and Generative AI Roadmap, building high-security data centers for intelligence community use, and cooperating with NIST on technical standards. The plan does not assign specific funding for these tasks; the Pentagon's existing AI budgets are expected to absorb the work [3][18].
The plan directs the Department of Education and the Department of Labor to expand AI-related apprenticeship pathways, support career and technical education programs that include AI skills, and authorize tax-free employer reimbursement for AI training (the latter requires legislation that has not yet passed). It calls for the establishment of an AI Workforce Research Hub to study labor-market impacts and tasks the Bureau of Labor Statistics with new AI occupational data collection. These workforce provisions were operationalized in part through Executive Order 14277 ("Advancing Artificial Intelligence Education for American Youth," signed April 23, 2025), which created the White House Task Force on AI Education chaired by the OSTP Director [3][21].
One of the more contentious provisions: the federal government is to procure only large language models that comply with two "Unbiased AI Principles" of truth-seeking, under which models should prioritize historical accuracy, scientific inquiry, and objectivity, and ideological neutrality, under which models should not manipulate their outputs in favor of ideological dogmas such as DEI unless prompted to do so by the user.
These principles were operationalized through Executive Order 14319 on the same day the plan was released. OMB was directed to issue implementation guidance within 120 days, which it did in late 2025 with a memorandum specifying contract clauses, vendor self-certification requirements, and waiver procedures. The provision generated substantial debate over whether it amounts to government compelled speech and whether commercial models like Claude Opus 4.7 and GPT-5 can plausibly be tuned to satisfy both criteria simultaneously [3][22][23].
The second pillar is where the plan's most concrete operational requirements live. It treats AI as a power-and-physics problem first and a policy problem second.
The plan directs the creation of new categorical exclusions under the National Environmental Policy Act for AI-related data centers and supporting infrastructure, dramatically shortening environmental review for qualifying projects. It expands the FAST-41 process for federal infrastructure permitting to cover AI projects, and instructs the Department of the Interior, the Department of Energy, and the Department of Defense to identify federal sites suitable for compute facilities. EO 14318, signed the same day, defines a "Data Center Project" as a facility requiring more than 100 megawatts of new load dedicated to AI and creates the legal framework for the new exclusions [3][24][25].
DOE moved fastest. On July 24, 2025, the day after the plan's release, the Department announced four initial federal sites for data center development: Idaho National Laboratory, the Oak Ridge Reservation in Tennessee, the Paducah Gaseous Diffusion Plant in Kentucky, and the Savannah River Site in South Carolina. In October 2025, the US Air Force solicited proposals to develop "underutilized" lands at Arnold AFB, Edwards AFB, Joint Base McGuire-Dix-Lakehurst, Davis-Monthan AFB, and Robins AFB for data center use [25][26].
The plan acknowledges that "American energy capacity has stagnated since the 1970s while China rapidly built out their grid" and calls for grid expansion sufficient to power the projected AI buildout. It encourages "frontier energy sources": enhanced geothermal, advanced nuclear fission, and nuclear fusion. It directs DOE to coordinate with utilities and the Federal Energy Regulatory Commission on transmission siting, and to establish technical assistance programs for state-level permitting reform. The plan also incorporates the prior Executive Order 14156 on declaring a National Energy Emergency, which Trump signed on inauguration day and renewed for another year on January 14, 2026 [3][27].
Despite Trump campaign criticism of the CHIPS and Science Act, the plan does not call for repeal. It directs the Department of Commerce to remove "extraneous policy requirements" from CHIPS Act funding, including diversity-related conditions Biden had attached to grants, while preserving the underlying domestic semiconductor manufacturing program. The plan calls for accelerated funding for advanced packaging, semiconductor equipment, and emerging technologies including photonic and neuromorphic chips [3][18].
The plan instructs the Cybersecurity and Infrastructure Security Agency (CISA) to integrate AI threats into the Cybersecurity Performance Goals, and directs NIST to develop voluntary AI-specific cybersecurity benchmarks including red-teaming guidance and incident response protocols. It calls for an AI Information Sharing and Analysis Center (AI-ISAC) to be established under DHS leadership for cross-sector threat sharing, and it instructs the Department of Defense to develop AI-tailored incident response protocols and "secure-by-design" technical standards for high-security data centers [3][5][24].
The third pillar mobilizes the State Department, Commerce, and Treasury on AI exports, alliances, and controls.
The plan directs the Secretary of Commerce to establish the American AI Exports Program, soliciting industry-submitted "full-stack AI technology packages": hardware, models, software, applications, and standards. The program was given a launch deadline of October 21, 2025. Selected packages are designated "priority AI export packages" eligible for federal financing through the Export-Import Bank, the US International Development Finance Corporation, and similar agencies. The legal foundation is Executive Order 14320, signed on the same day as the plan [3][28][29].
The administration has framed the program as the centerpiece of an "AI Alliance": a coalition of countries committed to the US technology stack, framed in opposition to Chinese AI exports and the open-source DeepSeek lineage that emerged in early 2025. Initial reporting suggests the program has attracted submissions from major US hyperscalers and chip vendors and that the State Department has begun bilateral discussions with the Gulf states, India, Japan, South Korea, and several European countries on adoption commitments [29][30].
The plan calls for tightening export controls on advanced compute and semiconductor manufacturing equipment, particularly with respect to China, while warning against overly broad controls that would push allied countries toward Chinese alternatives. It directs the Department of Commerce to use "location verification" and other tools to prevent diversion of chips through third countries. In May 2025, the Trump administration rescinded the Biden-era "AI diffusion rule," the tiered chip export framework that would have classified countries into three categories with different license requirements, on the basis that it was insufficiently surgical. New, narrower export rules have rolled out incrementally through 2025 and 2026 [3][31].
The plan directs the United States to engage "more actively" in international AI governance bodies including the International Organization for Standardization and the OECD, with a goal of countering Chinese influence in AI standardization. It does not embrace the AI Safety Summit series originating with the Bletchley Declaration of November 2023 or the Seoul Declaration of May 2024. Instead it endorses bilateral and minilateral engagement, particularly through the Quad and the G7, and through CAISI's bilateral relationship with the UK AI Security Institute (the renamed AI Safety Institute) [3][32].
On July 23, 2025, Trump signed three executive orders alongside the plan. All three were published in the Federal Register on July 28, 2025. Their numbering and titles are listed below.
| EO Number | Title | Date Signed | Federal Register | Core Effect |
|---|---|---|---|---|
| 14318 | Accelerating Federal Permitting of Data Center Infrastructure | July 23, 2025 | 90 FR 35379 | Defines AI Data Center Project; new NEPA categorical exclusions; FAST-41 expansion; federal land use; financing |
| 14319 | Preventing Woke AI in the Federal Government | July 23, 2025 | 90 FR 35385 | Establishes "Unbiased AI Principles" for federal LLM procurement; OMB implementation guidance within 120 days |
| 14320 | Promoting the Export of the American AI Technology Stack | July 23, 2025 | 90 FR 35393 | Establishes American AI Exports Program; designates priority packages; aligns federal financing |
A fourth executive order with strong AI relevance, signed earlier in 2025, also forms part of the operative framework: EO 14277, "Advancing Artificial Intelligence Education for American Youth" (April 23, 2025), which created the White House Task Force on AI Education chaired by the OSTP Director [21].
A fifth, signed later, extended the framework to state preemption: the December 11, 2025 executive order, "Ensuring a National Policy Framework for Artificial Intelligence" (also referred to as "Eliminating State Law Obstruction of National Artificial Intelligence Policy"), which created the DOJ AI Litigation Task Force, directed FTC preemption analysis, and conditioned BEAD broadband funding on states avoiding "onerous" AI laws. The December order is structurally a follow-up to the AI Action Plan's directive to use federal funding leverage against state AI regulation [13][33].
The plan distributes more than 90 actions across the executive branch. Outside trackers, including the Center for Security and Emerging Technology and the Conference Board, have published detailed implementation matrices. The most extensive mandates fall on Commerce (especially NIST and CAISI) and the Department of Defense. The table below summarizes major agency deliverables and known status as of mid-2026.
| Agency | Selected deliverables | Status |
|---|---|---|
| OSTP | Coordinate plan implementation; chair AI Education Task Force; coordinate with NSC | On track |
| DOC / NIST / CAISI | Revise AI RMF; lead AI Exports Program; develop AI Agent Standards Initiative; pre-deployment frontier model testing | RMF revision underway; Exports Program launched October 21, 2025; AI Agent Standards Initiative announced February 2026; CAISI testing agreements with Google DeepMind, Microsoft, xAI signed May 2026 |
| DOE | Identify federal sites for data centers; release AI Strategy and Compliance Plan; accelerate frontier energy permitting | Four sites identified July 24, 2025; AI Strategy and Compliance Plan released October 2, 2025 |
| DOD | Refine Responsible AI Strategy and Generative AI Roadmap; build high-security data centers; collaborate with NIST | Updated guidance released through 2025; high-security data center program ongoing |
| DHS / CISA | Establish AI-ISAC; integrate AI into Cybersecurity Performance Goals; AI incident response | AI-ISAC announced 2025; Cybersecurity Performance Goals updated |
| DOI | Authorize data center construction on federal lands | Initial sites authorized 2025 |
| NSF | Expand compute access for academic researchers; AI education research; AI Workforce Research Hub | Compute expansion underway; some NSF programs cut concurrently due to broader budget reductions, drawing criticism |
| Department of Education / Department of Labor | AI literacy curriculum; AI apprenticeships | Implemented through EO 14277 |
| OMB | Federal procurement guidance within 120 days; revise M-24-10 and M-24-18 | Revisions issued in late 2025 |
| FTC | Issue policy statement on FTC Act preemption (per December 11, 2025 EO) | Statement deadline March 2026 |
| USPTO | Issue guidance on AI inventorship and copyright | Guidance under development |
| GSA | First three AI Prioritization FedRAMP 20x Low authorizations | On track for January 2026 completion |
| Air Force | Develop "underutilized" lands at five bases for data centers | Solicitation issued October 2025 |
Reporting in July and August 2025 noted that several items present in earlier drafts were absent or substantially weakened in the final document; Politico, Axios, and Lawfare each covered the changes.
It is worth noting that some of these omissions reflect deliberate policy choices by the drafters rather than draft erosion. Both Sacks and Krishnan have argued publicly that bias mandates and disclosure regimes would slow innovation and invite legal challenges; the plan's final shape is consistent with their stated views.
Industry response was generally favorable on infrastructure and procurement, mixed on procurement bias rules, and concerned about export controls. The Frontier Model Forum, an industry consortium of Anthropic, Google DeepMind, Microsoft, and OpenAI, released a measured statement welcoming infrastructure permitting reform and federal compute investment. Trade groups including the Information Technology Industry Council (ITI), the Business Roundtable, and the Chamber of Commerce praised the plan's deregulatory approach. The Semiconductor Industry Association applauded the export and CHIPS Act provisions [5][20][35].
Individual lab responses largely tracked their RFI submissions.
The AFL-CIO issued a statement on July 23 titled "Trump AI Action Plan Is 'A Gift to Big Tech.'" President Liz Shuler called the plan a vehicle to "flood U.S. markets with untested, unchecked artificial intelligence (AI) that threatens good jobs and workers' civil rights" and warned that conditioning federal funding on state AI deregulation would "strip workers of basic protections." In October 2025, the AFL-CIO launched the "Workers First Initiative on AI," a counter-blueprint for state and federal policy emphasizing collective bargaining, worker representation in AI deployment decisions, and disclosure of workplace AI use [6][36].
Academic labor groups, including the American Association of University Professors, joined civil society organizations in highlighting the absence of workplace discrimination provisions and the gutting of EEOC enforcement under the broader administration. FedScoop coverage of a labor-coalition letter to the White House noted that the plan "contains a good bit of pro-labor messaging" but "was light on details" [37].
Civil rights organizations have been broadly critical. The American Civil Liberties Union (ACLU), the Lawyers' Committee for Civil Rights Under Law, the Center for Democracy & Technology, the Leadership Conference on Civil and Human Rights, and Public Citizen all issued statements opposing core provisions of the plan, particularly the federal preemption pressure and the absence of algorithmic-discrimination protections [7][38][39].
The ACLU's senior policy counsel Cody Venzke called the procurement bias rules "government censorship masquerading as neutrality" and argued that "President Trump's attempt to restrict state AI regulations is not only harmful, it raises serious legal questions as the president is acting beyond any statute passed by Congress." The Center for AI and Digital Policy filed a complaint with OSTP arguing that the RFI process was inadequate and that key submissions on civil liberties were ignored. Public Citizen published a legal analysis arguing that the December 2025 follow-on preemption order rests on an unconstitutional theory of federal power [7][39][40].
Academic reception was more nuanced. Stanford HAI called the plan "an ambitious blueprint that defines AI as an existential national priority" and noted positive elements (open-source support, NSF compute, frontier evaluation) alongside concerns about safety implementation, civil liberties, and academic research funding. The Atlantic Council called the plan "a deliberative and thorough plan," citing its concrete infrastructure deliverables and "AI Alliance" framework. Brookings published a series of takes, including a critical piece by Mark MacCarthy noting that the plan undermines its own goals by "gutting the National Science Foundation through grant cancellations and staff terminations" while simultaneously asking NSF to expand AI compute access. The Center for Security and Emerging Technology (CSET) at Georgetown released a tracker of plan deliverables and noted that approximately one-third of the actions do not identify a lead agency, and no implementation timelines are given for many tasks [41][42][43][44].
The Center for AI Safety newsletter took a more sanguine view on safety, noting that while the plan is not safety-focused, it does not actively encourage release of open-weight models with frontier dangerous capabilities, and includes positive provisions like a DARPA project on interpretability and AI control, plus chemical, biological, radiological, nuclear, and explosives (CBRNE) risk research [45].
International reactions were dominated by the near-simultaneous release of competing strategies: the European Union pressed forward with its parallel "AI Continent" plan, while China published its "Global AI Governance Action Plan" on July 26, 2025, positioning it as a multilateral counterweight to the American export push.
Roughly sixteen months on from EO 14179 and ten months from the plan's release, implementation is partial and uneven.
Shipped and operational: the American AI Exports Program (launched October 21, 2025), the four DOE federal data center sites (announced July 24, 2025), OMB's procurement guidance under EO 14319, and the AI education workstream under EO 14277.
In progress or partial: the NIST AI RMF revision, CAISI's pre-deployment testing agreements, the AI-ISAC, the Air Force data center solicitations, and DOD's high-security data center program.
Slipped or unfunded: the tax-free employer reimbursement for AI training (which requires legislation that has not passed), the AI Workforce Research Hub, and NSF compute expansion, which has been undercut by concurrent grant cancellations and staff reductions.
The plan does not stand alone. It sits at the center of an interconnected federal AI framework: EO 14179 supplies the mandate, the three companion orders of July 23, 2025 (EOs 14318, 14319, and 14320) supply the legal machinery for permitting, procurement, and exports, EO 14277 covers AI education, and the December 11, 2025 order extends the framework to preemption of state law.
The plan also affects parallel international institutions. The Bletchley Declaration and Seoul Declaration frameworks for international AI safety cooperation were de-emphasized by JD Vance's February 2025 Paris speech, and the plan's international pillar largely bypasses the Summit framework in favor of bilateral and minilateral engagement. Voluntary commitments through the Frontier Model Forum and lab-specific responsible scaling policies continue to operate alongside the federal framework, with US labs in some cases aligning more closely with the EU AI Act's transparency requirements than with the plan's procurement standards [4][13][32].
The gap between the plan and the Biden-era framework can be summarized as follows.
| Dimension | EO 14110 (Biden, October 2023, revoked January 2025) | AI Action Plan and EO 14179 framework |
|---|---|---|
| Core posture | Risk mitigation, oversight, equity | Innovation, dominance, deregulation |
| Authority used | Defense Production Act for compelled reporting | Procurement, NEPA, federal lands, exports |
| Industry reporting | Mandatory above 10^26 FLOPs | Voluntary CAISI testing |
| Federal procurement | Risk-based with equity reviews | Two "unbiased AI principles" |
| Lead institutions | US AI Safety Institute, CAIOs, AI Bill of Rights | CAISI, Chief AI Officer Council, AI Litigation Task Force |
| State law posture | Permissive | Active preemption push |
| Energy and infrastructure | Limited focus | Central pillar; energy emergency declaration; NEPA exclusions |
| Watermarking and content authentication | Required guidance | Deprioritized |
| Immigration | Streamline AI talent visas | Rolled back |
| International coordination | Bletchley/Seoul summit series, UK AISI partnership | Paris communique declined; bilateral CAISI-AISI cooperation continues |
| Number of directives | More than 100 specific actions | More than 90 federal policy actions |
| Legal durability | Revoked by next administration | Same risk; durability depends on continued executive support |