White House Unveils AI Policy Framework, Pushes to Preempt State Laws
The Trump administration released its National Policy Framework for AI on March 20, 2026, calling for federal preemption of state AI laws while prioritizing child safety and a sector-specific regulatory approach.

The White House released its National Policy Framework for Artificial Intelligence on March 20, 2026, laying out legislative recommendations that aim to establish a unified federal approach to AI regulation — and, critically, to preempt the growing patchwork of state-level AI laws.
The four-page document sets out seven broad priorities for Congress and represents the Trump administration's most concrete articulation yet of how it believes AI should be governed in the United States. The framework balances pro-innovation positioning with targeted protections, particularly around children, while seeking to prevent states from imposing what it characterizes as "inconsistent or burdensome" requirements on AI developers.
The Seven Pillars
The framework organizes its recommendations around seven key areas:
Child Safety and Privacy. The most detailed section calls for parental account controls, privacy-protective age assurance requirements for AI services likely to be accessed by minors, and product features designed to reduce risks of sexual exploitation and self-harm. This priority reflects bipartisan consensus in Congress around protecting children online.
Community Protections. The framework addresses AI's impact on local communities, including concerns about automated decision-making in housing, employment, and criminal justice contexts.
Copyright. The framework wades into the contentious debate over AI training data and intellectual property, though the specific recommendations remain broad rather than prescriptive.
Free Speech. The document includes provisions against "indirect government censorship," signaling concern about AI platforms being pressured to moderate content at government direction.
Federal Regulation. In a notable stance, the framework explicitly recommends that Congress should not create any new federal rulemaking body to regulate AI. Instead, it advocates maintaining a "sector-specific" approach where existing regulatory agencies — the FTC, FDA, SEC, and others — oversee AI within their existing jurisdictions.
Workforce. The framework acknowledges AI's impact on jobs and calls for workforce readiness initiatives, though details remain limited.
State Preemption. The most consequential — and controversial — recommendation calls for a single federal standard that would preempt state AI laws imposing requirements the administration considers burdensome.
The Preemption Battle
The federal preemption provision is where the framework's real policy stakes lie. Over the past two years, states have moved aggressively to regulate AI. Colorado, California, Illinois, and others have enacted or proposed laws covering everything from algorithmic bias to AI-generated content disclosure. Three states passed new AI transparency laws in March 2026 alone.
The White House framework would preclude states from regulating AI model development or imposing liability on AI developers for unlawful conduct by third parties using their systems. This provision directly addresses the concerns of major AI companies, which have lobbied against a fragmented regulatory landscape.
However, the framework includes important carve-outs. States would retain authority to enforce laws of "general applicability" that protect children, prevent fraud, and safeguard consumers. This creates a nuanced boundary — states cannot create AI-specific regulations, but they can enforce their existing consumer protection and fraud statutes even when AI is involved.
Congressional Reality Check
Despite the White House's push, the path to enactment is uncertain. Congress has repeatedly declined to pass comprehensive federal preemption of state AI laws. Efforts to include preemption provisions in both the One Big Beautiful Bill Act and the National Defense Authorization Act were unsuccessful.
The challenge is fundamentally political. State-level AI regulation has bipartisan support in many legislatures, and members of Congress are reluctant to override their state counterparts — especially when federal alternatives remain underdeveloped.
Legal analysts from firms including Ropes & Gray, Sullivan & Cromwell, and Cooley have noted that while the framework signals clear policy direction, "meaningful nationwide AI harmonization depends on congressional action and remains uncertain in timing and scope."
Industry and Advocacy Reactions
The AI industry has broadly welcomed the framework's deregulatory thrust. A new political operation called Innovation Council Action is preparing to spend more than $100 million in the 2026 midterm elections to back candidates aligned with a deregulatory AI agenda, with reported backing from figures close to the administration's AI policy circle.
Civil society groups have been more cautious. While acknowledging the child safety provisions, advocacy organizations have raised concerns that broad federal preemption could weaken protections that states have enacted in response to documented harms from AI systems in areas like hiring, housing, and criminal sentencing.
What Comes Next
The framework is a statement of intent, not legislation. Its significance lies in setting the terms of the congressional debate that will unfold through the rest of 2026 and into 2027. The key variables to watch include whether the midterm election cycle accelerates or delays legislative action, how state attorneys general respond to the preemption push, and whether the EU AI Act's implementation creates additional pressure for U.S. federal action.
For AI companies operating across multiple states, the framework offers a vision of regulatory simplification. Whether Congress delivers on that vision remains an open question.
Sources: White House, CNBC, Roll Call, Governing, Cooley, Ropes & Gray