The TRUMP AMERICA AI Act: Federal Government Makes Its Biggest Push Yet to Preempt State AI Laws
Senator Blackburn's discussion draft and the White House's seven-pillar AI framework signal Washington's most coordinated attempt at comprehensive federal AI regulation, with state preemption as the central battleground.

The Federal Play
March 2026 marked the most coordinated push toward comprehensive federal AI regulation in American history. On March 18, Senator Marsha Blackburn released a discussion draft of the TRUMP AMERICA AI Act. Two days later, the White House issued a National Policy Framework for Artificial Intelligence organized around seven pillars. Together, these documents represent Washington's clearest signal that it intends to set the rules for AI at the federal level — and override the patchwork of state laws that have proliferated in the absence of federal action.
The Seven Pillars
The White House framework is built around seven principles:
- Protecting children from harmful AI interactions
- Safeguarding communities from AI-enabled threats
- Respecting intellectual property in AI training data
- Preventing censorship in AI systems
- Enabling innovation by reducing regulatory barriers
- Developing an AI-ready workforce through education and training
- Establishing federal preemption of state AI laws
The seventh pillar is the most consequential and the most contentious. Federal preemption would override the 40-plus state AI laws already on the books and prevent the hundreds of bills currently moving through state legislatures from taking effect — at least in areas where federal law applies.
Why Preemption Is the Central Fight
For AI companies, the compliance argument for preemption is straightforward. In 2025, all fifty states introduced AI-related legislation, with thirty-eight enacting some form of AI law. A company operating a nationwide AI product now faces a matrix of overlapping, sometimes contradictory requirements. Tennessee bans AI therapy bots. Colorado requires impact assessments for high-risk AI. California is considering mandatory watermarking of AI-generated content. Illinois regulates AI in hiring decisions.
A single federal standard would replace this patchwork with one set of rules. Industry groups have lobbied heavily for preemption, arguing that regulatory fragmentation slows innovation and creates compliance costs that disproportionately burden smaller companies.
The counterargument is equally clear: states have moved because the federal government has not. Consumer advocates and state attorneys general argue that preemption would eliminate protections that citizens currently have, replacing them with a federal standard that may be weaker, slower to update, and more influenced by industry lobbying.
What the TRUMP AMERICA AI Act Proposes
The discussion draft from Senator Blackburn covers a wide range of AI governance topics. While the full text is still being refined through stakeholder feedback, the key provisions include:
- Federal oversight authority for high-risk AI applications, potentially housed in an existing agency or a new coordinating body
- Transparency requirements for AI systems interacting with consumers
- Liability frameworks for AI-caused harms
- Safe harbor provisions for companies that follow approved AI risk management practices
- Explicit federal preemption language for areas covered by the act
The safe harbor provision is particularly significant. It would create a legal incentive for companies to adopt standardized risk management frameworks — essentially trading voluntary compliance for legal protection.
The Lobbying Landscape
The battle lines are predictable. Large AI companies favor federal preemption with industry-friendly standards. Smaller AI companies want preemption but worry about compliance costs under any federal regime. State attorneys general oppose preemption as a loss of enforcement authority. Consumer groups want strong federal standards but fear that preemption without them would be a net loss for public protection.
The less predictable dynamic is bipartisan interest. AI regulation does not split cleanly along party lines. Republican lawmakers have been motivated by concerns about AI censorship and child safety. Democratic lawmakers focus on algorithmic discrimination and worker displacement. Both parties see political upside in acting on AI — which is why the current legislative window may be unusually wide.
What Happens Next
A discussion draft is not a bill, and a bill is not a law. The TRUMP AMERICA AI Act faces a long road through committee markup, floor debate, reconciliation with any House counterpart, and the president's signature. The White House framework provides political cover but carries no enforcement mechanism of its own.
The more likely near-term outcome is targeted federal legislation on specific AI issues — child safety, deepfakes, critical infrastructure — rather than a single comprehensive bill. Comprehensive AI regulation has been the white whale of technology policy for three years. The current push is the closest Congress has come to catching it.
Meanwhile, states continue to legislate. Every week without federal action adds another state law to the patchwork. The longer Congress waits, the harder preemption becomes — both politically and practically. The clock is ticking on both sides.