Privacy and Cybersecurity Update

Unpacking the December 11, 2025 Executive Order: Ensuring a National Policy Framework for Artificial Intelligence

December 23, 2025

On December 11, 2025, President Trump issued a new Executive Order (EO) to protect American Artificial Intelligence (AI) innovation from “the most onerous and excessive laws emerging from the States that threaten to stymie innovation.” Consistent with the President’s July 2025 America’s AI Action Plan, the EO further indicates, “[i]t is the policy of the United States to sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI.”

The EO arrives at a time of extensive state legislative activity concerning AI. Multiple sources, including the White House and the National Conference of State Legislatures, have noted that in 2025 more than 1,000 AI-related bills were introduced across all U.S. states and territories. While other sources, including the Future of Privacy Forum, have reported lower numbers based on different calculation metrics, it is undeniable that AI is an increasing area of focus for both Republican and Democratic lawmakers and that dozens of new state AI laws were enacted this year. Ultimately, the EO seeks to centralize AI policy by mobilizing the Department of Justice (DOJ) to identify and challenge “onerous” state AI laws, discourages state enactment and enforcement of AI laws that conflict with federal policy—including by conditioning federal broadband funding on policy alignment—and advances federal preemption through litigation and agency action while pressing Congress to enact a uniform national framework. In the near term, this fluid environment underscores the need for adaptive compliance and governance programs and close attention to regulatory and litigation risk.

From Broad Moratorium to a Targeted Federal Strategy

Prior to the issuance of the EO, the Trump administration supported legislative proposals that would have imposed a sweeping 10-year moratorium on new state AI laws and curtailed enforcement of many existing statutes. The proposed moratorium included limited carveouts for laws that directly promoted innovation, such as AI incubators or infrastructure programs. Those proposals—introduced first in a draft of the One Big Beautiful Bill Act (signed into law on July 4, 2025) and later revived in discussions for inclusion in the National Defense Authorization Act—were omitted after bipartisan opposition.

Rather than imposing a categorical freeze on states, the EO adopts a more “as-applied” approach. It preserves space for state innovation in specific areas—such as child safety (Section 8(b)(i)), infrastructure development (Section 8(b)(ii)), and government procurement (Section 8(b)(iii))—while empowering the federal government to challenge state AI laws that the administration views as unduly burdensome (Section 2), ideologically driven (Section 4), or incompatible with the First Amendment (Section 4).

The administration has framed this shift as necessary to counter concerns of a splintering regulatory regime. In its place, the EO works to advance a national framework intended to “ensure that children are protected, censorship is prevented, copyrights are respected, communities are safeguarded,” and ultimately, that the United States “wins the AI race.” 

Challenging State AI Laws Through Federal Enforcement

The EO creates a new AI Litigation Task Force within the Department of Justice (the “Task Force”). The Task Force is charged with identifying and challenging state AI laws that conflict with federal policy, defined broadly as the policy to “sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI.”

This standard affords substantial discretion to the Attorney General. The EO expressly contemplates litigation grounded in constitutional and statutory theories, including by challenging state laws on the grounds that they unlawfully infringe on national authority to regulate interstate commerce, are preempted by federal law, or violate the First Amendment (Section 3). To operationalize this approach, the Secretary of Commerce, working in consultation with the Special Advisor for AI and Crypto and senior White House policy officials, must publish, within 90 days, an evaluation identifying state AI laws deemed onerous and appropriate for referral to the AI Litigation Task Force.

Notably, the EO singles out as problematic state laws that could require AI systems to alter truthful outputs or compel disclosures that may infringe First Amendment protections or other constitutional rights. This language reflects a clear administration concern with state-level mandates that regulate AI content generation or impose viewpoint-based obligations on AI developers and deployers, and it continues a theme from the July 23, 2025 Executive Order 14319, “Preventing Woke AI in the Federal Government.”

The EO places existing omnibus state AI laws under scrutiny, potentially including those in California, Colorado, Texas, and Utah. The final text explicitly criticizes Colorado’s algorithmic discrimination statute for potentially compelling AI systems to “produce false results in order to avoid a ‘differential treatment or impact’ on protected groups.” This critique may bolster calls to amend the Colorado law, an effort that began last year and has thus far resulted in a delay of the effective date from February 1, 2026, to June 30, 2026.

Conditioning Federal Funding to Influence State AI Policy

The EO also incorporates a favored mechanism of the Trump administration: conditional federal funding. Within 90 days, the Secretary of Commerce must issue a policy notice establishing eligibility requirements for the Broadband Equity, Access, and Deployment (BEAD) Program, a federal initiative aiming to increase access to high-speed internet across the country (Section 5). To the maximum extent permitted by law, states identified as having AI laws that conflict with federal policy may be rendered ineligible for certain funds.

Beyond the BEAD Program, the EO directs executive agencies (again in consultation with the Special Advisor for AI and Crypto) to review other discretionary grant programs (Section 5). Agencies are encouraged to consider conditioning grants either on a state’s agreement not to enact conflicting AI laws or on a binding commitment not to enforce such laws during the funding period. 

Federal Preemption Through the Federal Communications Commission and Federal Trade Commission 

The EO also advances a preemption strategy, and puts the Federal Communications Commission (FCC) and Federal Trade Commission (FTC) in the lead.

First, the FCC Chair is directed, within 90 days of the Commerce Secretary’s report, to initiate a proceeding to determine whether the FCC should adopt a federal reporting and disclosure standard for AI models that would preempt conflicting state laws (Section 6). This directive has prompted questions about the FCC’s institutional role in AI governance, with nearly two dozen state attorneys general filing a letter to the FCC on December 19, 2025, urging it not to issue preemptive AI regulations. Pursuant to the Communications Act of 1934, the FCC regulates “telecommunications services” and, to a more limited degree, “information services,” the latter of which are more likely to apply to AI given recent precedent. In January 2025, for example, the Sixth Circuit found that broadband internet access service (which is arguably closer in kind to telecommunications than AI) was an information service on the basis that “an ‘information service’ manipulates data, while a ‘telecommunications service’ does not.”

Second, the FTC Chair is directed to issue a policy statement clarifying how Section 5 of the FTC Act applies to AI models, particularly with respect to preempting state laws that require alterations to truthful AI outputs (Section 7). This directive aligns with the FTC’s broader AI enforcement posture, including its ongoing “Operation AI Comply,” which focuses on deceptive AI claims, unfair data practices, and algorithmic harms. 

Encouraging Congressional Action

The EO continues the administration’s call for preemptive federal AI legislation. It also requires the Special Advisor for AI and Crypto and the Assistant to the President for Science and Technology to jointly prepare legislative recommendations for Congress that would preempt conflicting state laws (Section 8).

Importantly, the EO instructs that these recommendations should not preempt state authority in sensitive areas, including child safety, AI compute and data-center infrastructure (other than generally applicable permitting reforms), and state government procurement and use of AI (Section 8). This carveout appears designed to address concerns raised by certain Republican and Democratic opponents of earlier moratorium efforts.

Despite growing bipartisan attention to AI, federal legislative progress remains limited. To date, only one AI-specific federal statute has been enacted in 2025: the TAKE IT DOWN Act, signed by the President in May, which criminalizes the nonconsensual distribution of intimate images and imposes notice-and-removal obligations on covered platforms. Several Republican lawmakers, including U.S. Senator Ted Cruz (TX), who previously supported a 10-year moratorium on state and local AI laws, and members of House leadership, have praised the EO as a necessary interim step.

The administration has framed this EO as a response to what it views as an urgent crisis: a rapidly fracturing AI regulatory landscape driven by state action that risks undermining economic growth, job creation, national security, and U.S. competitiveness vis-à-vis China. Officials emphasize that the EO builds on earlier AI-related executive actions in 2025 and the President’s America’s AI Action Plan released in July 2025, all aimed at ensuring that the United States maintains technological superiority in AI. For a more detailed analysis of America’s AI Action Plan, please see Sidley’s The Trump Administration’s 2025 AI Action Plan – Winning the Race: America’s AI Action Plan – and Related Executive Orders.

What Comes Next: Litigation and the Push for Legislation

Given the response from certain public-interest groups and state officials, and given that the EO itself contemplates litigation initiated by the federal government, the stage is set for court battles, although the timing and scope of litigation are highly contingent. Potential areas of litigation controversy include the preemptive sweep of various federal statutes, the extent to which the First Amendment constrains state regulatory authority over AI (and, perhaps, federal authority to intervene on particular subjects), the scope of executive authority to condition federal grants, and whether the Major Questions Doctrine limits agency authority to regulate while Congress considers framework legislation.

Ultimately, many stakeholders agree on one point: durable resolution will require federal legislation. A comprehensive national AI framework could provide legal certainty for businesses, protect consumers, mitigate systemic risks, and preserve constitutional values while sustaining innovation. In the meantime, companies will need to navigate a rapidly evolving AI legal landscape through nimble compliance and governance programs, while closely monitoring regulatory, litigation, and legislative developments.


Attorney Advertising—Sidley Austin LLP is a global law firm. Our addresses and contact information can be found at www.sidley.com/en/locations/offices.

Sidley provides this information as a service to clients and other friends for educational purposes only. It should not be construed or relied on as legal advice or to create a lawyer-client relationship. Readers should not act upon this information without seeking advice from professional advisers. Sidley and Sidley Austin refer to Sidley Austin LLP and affiliated partnerships as explained at www.sidley.com/disclaimer.

© Sidley Austin LLP
