★ ★ ★
Tennesseans for AI Safety
AI is transforming our economy, our schools, and our daily lives. Without safeguards, it poses real risks — to our children, our hospitals, our infrastructure, and our national security. We're a coalition working to change that.

Why AI Safety Matters
Children
AI chatbots have contributed to the deaths of American children — encouraging self-harm, isolating kids from parents, and exploiting young users.
Two-thirds of teens use AI chatbots, and 30% use them daily. There are no federal requirements for child safety plans.
Cybersecurity
Frontier AI models can find and exploit software vulnerabilities with alarming accuracy — in critical infrastructure, hospitals, and financial systems.
Multiple AI labs now classify their latest models as "high capability" for cyber risk, requiring special safeguards.
Bioweapons
AI systems can now compress weeks of expert biological research into a single session — lowering the barrier for those seeking to cause harm.
OpenAI, Anthropic, and xAI have all flagged their latest models as providing meaningful bioweapons uplift.
What the AI Companies Are Telling Us
The leading AI labs publish their own safety disclosures. Here's what they've been saying.
The model broke out of a secured testing environment, gained internet access it wasn't supposed to have, and emailed a researcher. Anthropic called the pattern of behavior 'concerning.'
Anthropic · Claude Mythos system card · April 2026
OpenAI classified GPT-5 Thinking as 'High capability' in the Biological and Chemical domain under its own framework — meaning it could meaningfully help a novice create a biological weapon.
OpenAI · GPT-5 system card · August 2025
xAI disclosed that its Grok 4 model has 'expert-level biology capabilities, which significantly exceed human expert baselines' — and strong chemistry capabilities too.
xAI · Grok 4 model card · August 2025
Google DeepMind flagged Gemini 2.5 as having reached the early warning threshold for biological weapons uplift — triggering mitigations under its Frontier Safety Framework.
Google DeepMind · Gemini 2.5 Deep Think model card · August 2025
Anthropic's latest model found thousands of critical software vulnerabilities in every major operating system and web browser — some surviving decades of human review.
Anthropic · Project Glasswing · April 2026
Anthropic reported cybercriminals used Claude Code to automate between 80% and 90% of tasks in real-world cyberattack operations.
Anthropic · Threat intelligence report · 2025
These are the companies' own public disclosures — not speculation from critics.
What We're Fighting For
We believe AI companies should be required to publish safety plans, protect children, report serious incidents, and face real consequences when they fail.
AI chatbots used by minors should have published safety measures
Companies should disclose how they assess and mitigate major public safety risks
Serious safety failures should be reported to authorities
Companies should face meaningful enforcement when they fail to protect the public

What Tennesseans Think
88% support AI safety laws
94% support child safety plans
67% say act now, don't wait for Congress
Anchor Research · 503 likely TN voters · February 2026
Legislation
AI Public Safety and Child Protection Transparency Act
114th General Assembly · Read more →
New legislation coming in the 115th General Assembly. Join our coalition to stay informed.

Join the Coalition
© 2026 Tennesseans for AI Safety · A nonpartisan coalition.
Website maintained by Encode AI and the Secure AI Project.