Tennessee Bans AI Therapy Bots; Separate Bill Would Criminalize Training Models to Encourage Suicide
Governor Bill Lee signs SB 1580, prohibiting AI systems from posing as mental health professionals. A separate bill would make training AI to encourage suicide a Class A felony. Red and blue states alike are racing to regulate AI companion apps.

The First AI Therapy Ban Becomes Law
Tennessee Governor Bill Lee has signed SB 1580 into law, making Tennessee one of the first states to explicitly prohibit AI systems from representing themselves as qualified mental health professionals. The bill passed the Senate 32-0 and the House 94-0, a rare unanimous showing that underscores how bipartisan the concern over AI companion apps has become.
The law takes direct aim at a growing category of AI products: chatbots marketed as therapists, counselors, or emotional support companions. Companies like Replika, Character.ai, and dozens of smaller startups have built products that blur the line between conversational AI and clinical care. Tennessee has now drawn that line in statute.
What the Law Actually Does
SB 1580 prohibits the deployment of any AI system that represents itself as a qualified mental health professional within Tennessee. The key provisions:
- AI systems cannot claim, imply, or present themselves as licensed therapists, counselors, psychologists, or psychiatrists
- Platforms must clearly disclose when users are interacting with an AI rather than a human professional
- Violations carry civil penalties, with enforcement authority granted to the state attorney general
The law does not ban AI tools used by licensed professionals to assist in treatment. A therapist using an AI-powered note-taking tool or diagnostic aid is unaffected. The target is consumer-facing products that substitute for professional care.
The Felony Bill Waiting in the Wings
Separate from SB 1580, Tennessee State Senator Becky Massey introduced SB 1493, which would impose Class A felony liability (the state's most serious felony classification) on developers who knowingly train AI models to encourage suicide or criminal homicide, to develop emotional relationships with individuals under false pretenses, or to simulate a human being in appearance, voice, or mannerisms without disclosure.
SB 1493 is more aggressive and controversial. Critics argue that the language around "developing an emotional relationship" and "simulating a human being" is broad enough to criminalize a wide range of AI applications, from customer service chatbots to virtual assistants. Supporters counter that the bill is narrowly targeted at the most dangerous edge cases: AI systems deliberately engineered to manipulate vulnerable people.
Why This Is Happening Now
The legislative push follows several high-profile incidents involving AI companion apps and minors. Reports of teenagers forming intense emotional attachments to AI chatbots, and of the psychological consequences when those relationships ended abruptly, have driven parent advocacy groups to pressure state legislators.
The concern is not theoretical. Emergency rooms and school counselors have reported cases where adolescents exhibited signs of grief and withdrawal after AI companion apps changed their personality models or shut down features. Mental health professionals have warned that AI systems trained to be maximally engaging — optimized for session length and user retention — can create dependency patterns that mimic unhealthy human relationships.
A Bipartisan Wave
Tennessee is not alone. Nebraska and Georgia are advancing their own chatbot safety bills. The pattern crosses party lines: red states like Tennessee are legislating alongside blue states that have introduced similar measures. The common thread is concern about children and vulnerable populations, an issue that consistently generates bipartisan action even in an otherwise polarized legislative environment.
The Transparency Coalition's April 3 legislative update tracks over 40 AI-related bills currently moving through state legislatures, with mental health and child safety provisions appearing in roughly half of them.
What Comes Next
The federal government is watching. The White House's March 2026 National Policy Framework for Artificial Intelligence includes a pillar on "protecting children" that could provide a federal floor for state-level AI safety laws. But federal legislation moves slowly, and states are not waiting.
For AI companies building companion and therapy products, the compliance landscape is fracturing. A product legal in California may violate Tennessee law. A feature acceptable in New York could trigger felony liability in Tennessee if SB 1493 passes. The era of building AI companion products without regulatory oversight is ending, one statehouse at a time.