Winning the Race: America's AI Action Plan

President Donald Trump
Signed: July 23, 2025
Summary: Sets out more than 90 federal policy actions across three pillars to maintain U.S. global leadership in artificial intelligence

"Winning the Race: America's AI Action Plan" is a policy blueprint published by the White House on July 23, 2025, setting out more than 90 federal policy actions intended to secure United States dominance in artificial intelligence. [1] The 28-page document was developed by the Office of Science and Technology Policy (OSTP), with Dean Ball, then serving as OSTP's Senior Policy Advisor for AI and Emerging Technology, as its primary staff drafter. [2] It was released at a summit titled "Winning the AI Race," hosted by the Hill and Valley Forum and the All-In podcast at the Andrew W. Mellon Auditorium in Washington, D.C. [3] President Donald Trump signed three accompanying executive orders at the event. [4]

The plan was widely described as a significant departure from the Biden administration's AI executive order, which Trump had revoked on his first day in office. [5] Where the Biden approach had emphasized safety, risk management, and equity, the Action Plan focused on deregulation, infrastructure expansion, international competition with China, and the removal of what the administration characterized as ideological bias from AI systems. [6]

Background

Executive Order 14179 and the RFI process

On January 23, 2025, Trump signed Executive Order 14179, titled "Removing Barriers to American Leadership in Artificial Intelligence," which directed the development of an AI Action Plan to "sustain and enhance America's global AI dominance in order to promote human flourishing, economic competitiveness, and national security." [7] On February 6, 2025, the National Science Foundation's Networking and Information Technology Research and Development (NITRD) program, acting on behalf of OSTP, published a Request for Information (RFI) in the Federal Register, inviting comment from the public, academia, industry, and governments by March 15, 2025. [8]

OSTP published more than 10,000 public comments in April 2025. [9] Notable respondents included OpenAI, which proposed a five-part strategy encompassing regulatory preemption, export controls, copyright protections, infrastructure investment, and government adoption; [10] Google, which advocated for energy infrastructure reform, innovation-friendly international approaches, and continued federal research funding; [11] and Anthropic, which focused on national security testing of frontier models, strengthening export controls, enhancing the security of AI labs, and scaling energy infrastructure. [12]

Relationship to prior policy

The Action Plan represented the second Trump administration's replacement for the regulatory framework established by Executive Order 14110 (October 2023), which had directed federal agencies to develop safety standards, require reporting from developers of powerful AI systems, and address algorithmic discrimination. [13] Trump had revoked that order on January 20, 2025. During his first term, Trump had issued Executive Order 13859 (February 2019), "Maintaining American Leadership in Artificial Intelligence," which similarly prioritized U.S. competitiveness but with a narrower scope. [14]

Contents

The Action Plan is organized around three pillars, with cross-cutting themes of workforce empowerment, freedom from ideological bias in AI systems, and prevention of misuse by adversaries.

Pillar I: Accelerate AI Innovation

The first pillar contains policy recommendations aimed at ensuring the United States leads in both the development and application of AI systems.

Pillar II: Build American AI Infrastructure

The second pillar addresses energy, data centers, semiconductors, cybersecurity, and workforce development for physical AI infrastructure.

Pillar III: Lead in International AI Diplomacy and Security

The third pillar covers export promotion, export controls, international governance, and national security evaluation of frontier models.

Accompanying executive orders

On the same day the Action Plan was released, Trump signed three executive orders to begin implementation: [4] [15]

  1. Promoting the Export of the American AI Technology Stack: Established the American AI Exports Program under the Department of Commerce, directing the development of full-stack AI export packages and mobilization of federal financing tools, with an implementation deadline of October 21, 2025. [16]
  2. Accelerating Federal Permitting of Data Center Infrastructure: Streamlined permitting for AI infrastructure projects on federal land and revoked the Biden administration's January 2025 Executive Order 14141 on AI infrastructure, which had required environmental reviews and alignment with clean energy goals. [17]
  3. Preventing Woke AI in the Federal Government: Established "unbiased AI principles" requiring that large language models procured by the federal government be "truthful" and "ideologically neutral," and directed OMB to issue implementing guidance by November 20, 2025. [16]

Subsequent developments

On December 11, 2025, Trump signed an additional executive order titled "Ensuring a National Policy Framework for Artificial Intelligence," which directed the Department of Justice to establish an AI Litigation Task Force to challenge state AI laws in court, instructed Commerce to identify state laws deemed "onerous" within 90 days, and threatened to withhold BEAD program funding from states with AI regulations the administration considered conflicting with federal policy. [18]

On March 20, 2026, the White House released a legislative framework outlining seven areas where it sought congressional action: child safety, community protections, intellectual property, free speech, innovation, workforce development, and federal preemption of state AI laws. The framework was developed by OSTP Director Michael Kratsios and White House Special Advisor David O. Sacks. [19]

Efforts to preempt state AI regulation through legislation faced repeated setbacks. A proposed 10-year moratorium on state-level AI regulation was included in the One Big Beautiful Bill Act but was stripped by the Senate in a 99–1 vote following opposition from a coalition of 40 state attorneys general and 260 state legislators. [20] A similar provision in the National Defense Authorization Act for fiscal year 2026 also failed. [18]

Reception

Industry

Major technology industry groups expressed support. TechNet called the plan a "policy framework [that] takes critical steps towards developing a strong domestic workforce, building critical AI infrastructure, launching public-private partnerships, removing regulatory barriers to innovation, strengthening the domestic AI stack, and enhancing U.S. global AI diplomacy." [5] Law firms including Skadden, White & Case, and Ropes & Gray published client advisories highlighting potential business opportunities in AI exports, data center permitting, and federal procurement, while also noting uncertainties around how "ideological bias" would be defined and enforced in practice. [6] [21] [22]

Policy analysts and think tanks

The Council on Foreign Relations published a multi-author assessment characterizing the plan as "a tale of two impulses within the administration." Contributors praised the inclusion of frontier model evaluations for national security risks but noted that it was unclear what would happen when evaluations found that a model had crossed a capability threshold, observing that there was "no shared understanding of what those mitigations should entail or who decides what qualifies as sufficient." Defense analysts highlighted the recommendation for an AI and Autonomous Systems Virtual Proving Ground but questioned whether the administration would follow through on resourcing. The CFR analysis also pointed to tensions between the goal of countering Chinese influence in international organizations and the administration's broader policies of withdrawing personnel and funding from multilateral institutions. [23]

The Brookings Institution published an extensive critique arguing that the plan gave "insufficient attention to accountability, ethics, and transparency," creating risks related to "unregulated AI systems, erosion of privacy, algorithm bias, polarization, misinformation, exploitative surveillance, unchecked corporate control over critical technologies, [and] unintended consequences on democratic governance." Brookings scholars also noted the tension between the plan's sweeping mandates for the National Science Foundation (e.g., leading new research labs, expanding NAIRR, developing testbeds) and the simultaneous defunding and destabilization of NSF under the administration, including the cancellation of more than 1,600 active grants. Separately, the Brookings analysis criticized the plan for lacking a domestic competition policy to prevent concentration among a small number of dominant AI firms. [24]

Brookings fellow Tom Wheeler argued that the "Preventing Woke AI" executive order amounted to "top-down censorship" and drew comparisons to content control practices in China, where AI outputs must align with the official ideology of the Chinese Communist Party. Wheeler and other commentators noted the vagueness of the term "ideological bias" and argued that the policy could burden smaller AI developers disproportionately, since large firms could absorb the compliance costs more easily. [25]

MIT Technology Review noted that, compared with Biden-era executive orders, the Action Plan was "mostly devoid of anything related to making AI safer," with the notable exception of the deepfake provisions. [26]

National security and biosecurity

The plan's provisions on frontier model evaluation and biosecurity drew substantive engagement from national security researchers and institutions. The law firm Steptoe observed that, despite avoiding the phrase "AI safety," the Action Plan included language and provisions that would be "familiar to experts from the AI safety world," including sections on interpretability, AI control, model evaluation, and CBRNE risks, and suggested that the administration was "indeed concerned by a range of AI safety issues, even if it does not use that phrase." [27]

The RAND Corporation published a primer for biosecurity researchers noting that the plan "plants a flag in the sand by establishing model evaluation as a new and rapidly evolving science" serving the government's need to understand frontier model risks in domains such as CBRN threats. RAND highlighted that the plan represented continuity with the prior administration's policies in the biological threat domain, while reframing the overall emphasis from "safety" to "opportunity," a shift previously signaled by the renaming of the U.S. AI Safety Institute as the Center for AI Standards and Innovation (CAISI). [28]

The Johns Hopkins Center for Health Security welcomed the inclusion of nucleic acid synthesis screening requirements and CAISI-led frontier model evaluations but recommended that CAISI adopt a risk-based approach prioritizing pandemic-level biological threats rather than running dozens of broad evaluations, arguing that the latter approach risked being costly and unsustainable while potentially missing the most consequential risks. [29] The Council on Strategic Risks praised the plan's biosecurity sections and urged the administration to set a timeline of nine months or less for moving from discussion to implementation of frontier model evaluations for national security risks. [30]

International context

The Action Plan was released amid a broader global contest over AI governance. The European Union had begun enforcing the EU AI Act and launched its AI Continent Action Plan in April 2025. Analysts at the Real Instituto Elcano characterized the U.S. approach as "largely hands-off" compared with the EU's risk-based regulatory framework, noting that the U.S. strategy had benefited its private sector, which led global private AI investment in 2024 with nearly $110 billion. However, the same analysis observed that the competitive dynamic risked overshadowing efforts on AI safety and risk management internationally. [31] The German Marshall Fund noted that the term "AI safety" had become "increasingly politically contentious" in the United States since 2025, though several federal policies, including the Action Plan's directives on high-risk AI use cases, continued to incorporate risk-management practices that resembled elements of the Biden-era approach. [32]

References

  1. "White House Unveils America's AI Action Plan". The White House. July 23, 2025. Retrieved March 21, 2026.
  2. "Scaling Laws: Navigating AI Policy: Dean Ball on Insights from the White House". Lawfare. August 15, 2025. Retrieved March 21, 2026.
  3. "Transcript: Donald Trump's Address at 'Winning the AI Race' Event". Tech Policy Press. July 24, 2025. Retrieved March 21, 2026.
  4. "White House Unveils New AI Action Plan". International Economic Development Council. July 30, 2025. Retrieved March 21, 2026.
  5. "Reactions to Trump's AI Action Plan". Tech Policy Press. July 24, 2025. Retrieved March 21, 2026.
  6. "White House Releases AI Action Plan: Key Legal and Strategic Takeaways for Industry". Skadden, Arps, Slate, Meagher & Flom. July 2025. Retrieved March 21, 2026.
  7. "Executive Order 14179 of January 23, 2025, "Removing Barriers to American Leadership in Artificial Intelligence"" (PDF). Federal Register. January 31, 2025. Retrieved March 21, 2026.
  8. "Request for Information on the Development of an Artificial Intelligence (AI) Action Plan". Federal Register. February 6, 2025. Retrieved March 21, 2026.
  9. "American Public Submits Over 10,000 Comments on White House's AI Action Plan". The White House. April 24, 2025. Retrieved March 21, 2026.
  10. "OpenAI Response to OSTP/NSF RFI on the Development of an Artificial Intelligence (AI) Action Plan" (PDF). OpenAI. March 13, 2025. Retrieved March 21, 2026.
  11. "Google recommendations for the U.S. AI Action Plan". Google. March 13, 2025. Retrieved March 21, 2026.
  12. "Anthropic's Recommendations to OSTP for the U.S. AI Action Plan". Anthropic. March 6, 2025. Retrieved March 21, 2026.
  13. "Executive Order 14110 of October 30, 2023, "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence"" (PDF). Federal Register. Retrieved March 21, 2026.
  14. "President Trump's Artificial Intelligence (AI) Action Plan Takes Shape as NSF, OSTP Seek Comments". Epstein Becker Green. February 2025. Retrieved March 21, 2026.
  15. "Trump AI plan and orders aim to deregulate, police bias and compete globally". FedScoop. July 24, 2025. Retrieved March 21, 2026.
  16. "White House Launches AI Action Plan and Executive Orders to Promote Innovation, Infrastructure, and International Diplomacy and Security". Wiley Rein. July 2025. Retrieved March 21, 2026.
  17. "How Trump's AI Action Plan and Executive Orders Will Impact U.S. Technology and Federal Procurement". Orrick, Herrington & Sutcliffe. August 2025. Retrieved March 21, 2026.
  18. "Preemption is No Panacea: Congress Must Create a Workable National Framework for American AI Dominance". Corporate Compliance Insights. February 5, 2026. Retrieved March 21, 2026.
  19. "Trump administration unveils national AI policy framework to limit state power". CNBC. March 20, 2026. Retrieved March 21, 2026.
  20. "The Trump Administration's 2025 AI Action Plan". Sidley Austin. July 30, 2025. Retrieved March 21, 2026.
  21. "White House unveils comprehensive AI strategy". White & Case. July 2025. Retrieved March 21, 2026.
  22. ""Winning the Race: America's AI Action Plan" – Key Pillars, Policy Actions, and Future Implications". Ropes & Gray. July 31, 2025. Retrieved March 21, 2026.
  23. "The Opportunities and Risks of Trump's AI Action Plan". Council on Foreign Relations. July 24, 2025. Retrieved March 21, 2026.
  24. "What to make of the Trump administration's AI Action Plan". Brookings Institution. October 28, 2025. Retrieved March 21, 2026.
  25. Tom Wheeler (July 31, 2025). "Trump's executive orders politicize AI". Brookings Institution. Retrieved March 21, 2026.
  26. "What you may have missed about Trump's AI Action Plan". MIT Technology Review. July 29, 2025. Retrieved March 21, 2026.
  27. "National Security and the AI Action Plan: A Deep Dive". Steptoe & Johnson. July 2025. Retrieved March 21, 2026.
  28. "Dissecting America's AI Action Plan: A Primer for Biosecurity Researchers". RAND Corporation. August 11, 2025. Retrieved March 21, 2026.
  29. "Biosecurity Guide to the AI Action Plan". Johns Hopkins Center for Health Security. July 2025. Retrieved March 21, 2026.
  30. "Review: Biosecurity Enforcement in the White House's AI Action Plan". Council on Strategic Risks. July 28, 2025. Retrieved March 21, 2026.
  31. "Risk without borders: the malicious use of AI and the EU AI Act's global reach". Real Instituto Elcano. 2026. Retrieved March 21, 2026.
  32. "Can the Transatlantic Community Align on AI Safety?". German Marshall Fund. 2026. Retrieved March 21, 2026.