| Winning the Race: America's AI Action Plan | |
|---|---|
| President | Donald Trump |
| Signed | July 23, 2025 |
| Summary | Sets out more than 90 federal policy actions across three pillars to maintain U.S. global leadership in artificial intelligence |
"Winning the Race: America's AI Action Plan" is a policy blueprint published by the White House on July 23, 2025, setting out more than 90 federal policy actions intended to secure United States dominance in artificial intelligence. [1] The 28-page document was developed by the Office of Science and Technology Policy (OSTP), with Dean Ball, then serving as OSTP's Senior Policy Advisor for AI and Emerging Technology, as its primary staff drafter. [2] It was released at a summit titled "Winning the AI Race," hosted by the Hill and Valley Forum and the All-In podcast at the Andrew W. Mellon Auditorium in Washington, D.C. [3] President Donald Trump signed three accompanying executive orders at the event. [4]
The plan was widely described as a significant departure from the Biden administration's AI executive order, which Trump had revoked on his first day in office. [5] Where the Biden approach had emphasized safety, risk management, and equity, the Action Plan focused on deregulation, infrastructure expansion, international competition with China, and the removal of what the administration characterized as ideological bias from AI systems. [6]
On January 23, 2025, Trump signed Executive Order 14179, titled "Removing Barriers to American Leadership in Artificial Intelligence," which directed the development of an AI Action Plan to "sustain and enhance America's global AI dominance in order to promote human flourishing, economic competitiveness, and national security." [7] On February 6, 2025, the National Science Foundation's Networking and Information Technology Research and Development (NITRD) program, acting on behalf of OSTP, published a Request for Information (RFI) in the Federal Register, inviting comment from the public, academia, industry, and governments by March 15, 2025. [8]
OSTP published more than 10,000 public comments in April 2025. [9] Notable respondents included OpenAI, which proposed a five-part strategy encompassing regulatory preemption, export controls, copyright protections, infrastructure investment, and government adoption; [10] Google, which advocated for energy infrastructure reform, innovation-friendly international approaches, and continued federal research funding; [11] and Anthropic, which focused on national security testing of frontier models, strengthening export controls, enhancing the security of AI labs, and scaling energy infrastructure. [12]
The Action Plan represented the second Trump administration's replacement for the regulatory framework established by Executive Order 14110 (October 2023), which had directed federal agencies to develop safety standards, require reporting from developers of powerful AI systems, and address algorithmic discrimination. [13] Trump had revoked that order on January 20, 2025. During his first term, Trump had issued Executive Order 13859 (February 2019), "Maintaining American Leadership in Artificial Intelligence," which similarly prioritized U.S. competitiveness but with a narrower scope. [14]
The Action Plan is organized around three pillars, with cross-cutting themes of workforce empowerment, freedom from ideological bias in AI systems, and prevention of misuse by adversaries.
The first pillar, "Accelerate AI Innovation," contains policy recommendations aimed at ensuring the United States leads in both the development and application of AI systems.
The second pillar, "Build American AI Infrastructure," addresses energy, data centers, semiconductors, cybersecurity, and workforce development for physical AI infrastructure.
The third pillar, "Lead in International AI Diplomacy and Security," covers export promotion, export controls, international governance, and national security evaluation of frontier models.
On the same day the Action Plan was released, Trump signed three executive orders to begin implementation: "Accelerating Federal Permitting of Data Center Infrastructure," "Promoting the Export of the American AI Technology Stack," and "Preventing Woke AI in the Federal Government." [4] [15]
On December 11, 2025, Trump signed an additional executive order titled "Ensuring a National Policy Framework for Artificial Intelligence," which directed the Department of Justice to establish an AI Litigation Task Force to challenge state AI laws in court, instructed Commerce to identify state laws deemed "onerous" within 90 days, and threatened to withhold BEAD program funding from states with AI regulations the administration considered conflicting with federal policy. [18]
On March 20, 2026, the White House released a legislative framework outlining seven areas where it sought congressional action: child safety, community protections, intellectual property, free speech, innovation, workforce development, and federal preemption of state AI laws. The framework was developed by OSTP Director Michael Kratsios and White House Special Advisor David O. Sacks. [19]
Efforts to preempt state AI regulation through legislation faced repeated setbacks. A proposed 10-year moratorium on state-level AI regulation was included in the One Big Beautiful Bill Act but was stripped by the Senate in a 99–1 vote following opposition from a coalition of 40 state attorneys general and 260 state legislators. [20] A similar provision in the National Defense Authorization Act for fiscal year 2026 also failed. [18]
Major technology industry groups expressed support. TechNet called the plan a "policy framework [that] takes critical steps towards developing a strong domestic workforce, building critical AI infrastructure, launching public-private partnerships, removing regulatory barriers to innovation, strengthening the domestic AI stack, and enhancing U.S. global AI diplomacy." [5] Law firms including Skadden, White & Case, and Ropes & Gray published client advisories highlighting potential business opportunities in AI exports, data center permitting, and federal procurement, while also noting uncertainties around how "ideological bias" would be defined and enforced in practice. [6] [21] [22]
The Council on Foreign Relations published a multi-author assessment characterizing the plan as "a tale of two impulses within the administration." Contributors praised the inclusion of frontier model evaluations for national security risks but noted that it was unclear what would happen when evaluations found that a model had crossed a capability threshold, observing that there was "no shared understanding of what those mitigations should entail or who decides what qualifies as sufficient." Defense analysts highlighted the recommendation for an AI and Autonomous Systems Virtual Proving Ground but questioned whether the administration would follow through on resourcing. The CFR analysis also pointed to tensions between the goal of countering Chinese influence in international organizations and the administration's broader policies of withdrawing personnel and funding from multilateral institutions. [23]
The Brookings Institution published an extensive critique arguing that the plan gave "insufficient attention to accountability, ethics, and transparency," creating risks related to "unregulated AI systems, erosion of privacy, algorithm bias, polarization, misinformation, exploitative surveillance, unchecked corporate control over critical technologies, [and] unintended consequences on democratic governance." Brookings scholars also noted the tension between the plan's sweeping mandates for the National Science Foundation (e.g., leading new research labs, expanding NAIRR, developing testbeds) and the simultaneous defunding and destabilization of NSF under the administration, including the cancellation of more than 1,600 active grants. Separately, the Brookings analysis criticized the plan for lacking a domestic competition policy to prevent concentration among a small number of dominant AI firms. [24]
Brookings fellow Tom Wheeler argued that the "Preventing Woke AI" executive order amounted to "top-down censorship" and drew comparisons to content control practices in China, where AI outputs must align with the official ideology of the Chinese Communist Party. Wheeler and other commentators noted the vagueness of the term "ideological bias" and argued that the policy could burden smaller AI developers disproportionately, since large firms could absorb the compliance costs more easily. [25]
MIT Technology Review noted that, compared with Biden-era executive orders, the Action Plan was "mostly devoid of anything related to making AI safer," with the notable exception of the deepfake provisions. [26]
The plan's provisions on frontier model evaluation and biosecurity drew substantive engagement from national security researchers and institutions. The law firm Steptoe observed that, despite avoiding the phrase "AI safety," the Action Plan included language and provisions that would be "familiar to experts from the AI safety world," including sections on interpretability, AI control, model evaluation, and CBRNE risks, and suggested that the administration was "indeed concerned by a range of AI safety issues, even if it does not use that phrase." [27]
The RAND Corporation published a primer for biosecurity researchers noting that the plan "plants a flag in the sand by establishing model evaluation as a new and rapidly evolving science" serving the government's need to understand frontier model risks in domains such as CBRN threats. RAND highlighted that the plan represented continuity with the prior administration's policies in the biological threat domain, while reframing the overall emphasis from "safety" to "opportunity," a shift previously signaled by the renaming of the U.S. AI Safety Institute as the Center for AI Standards and Innovation (CAISI). [28]
The Johns Hopkins Center for Health Security welcomed the inclusion of nucleic acid synthesis screening requirements and CAISI-led frontier model evaluations but recommended that CAISI adopt a risk-based approach prioritizing pandemic-level biological threats rather than running dozens of broad evaluations, arguing that the latter approach risked being costly and unsustainable while potentially missing the most consequential risks. [29] The Council on Strategic Risks praised the plan's biosecurity sections and urged the administration to set a timeline of nine months or less for moving from discussion to implementation of frontier model evaluations for national security risks. [30]
The Action Plan was released amid a broader global contest over AI governance. The European Union had begun enforcing the EU AI Act and launched its AI Continent Action Plan in April 2025. Analysts at the Real Instituto Elcano characterized the U.S. approach as "largely hands-off" compared with the EU's risk-based regulatory framework, noting that the U.S. strategy had benefited its private sector, which led global private AI investment in 2024 with nearly $110 billion. However, the same analysis observed that the competitive dynamic risked overshadowing efforts on AI safety and risk management internationally. [31] The German Marshall Fund noted that the term "AI safety" had become "increasingly politically contentious" in the United States since 2025, though several federal policies, including the Action Plan's directives on high-risk AI use cases, continued to incorporate risk-management practices that resembled elements of the Biden-era approach. [32]