California cemented its role as a national leader in AI regulation this week. On September 29, 2025, Governor Gavin Newsom signed SB 53 — the Transparency in Frontier Artificial Intelligence Act — the first U.S. state law to mandate public disclosure and safety guardrails for advanced frontier AI systems.
The law targets large frontier developers — companies building highly capable foundation models, such as OpenAI, Anthropic, Google, and Meta. SB 53 requires these developers to:
- Publish a “frontier AI framework” that explains how they implement national and international safety standards.
- Report “critical safety incidents” directly to state authorities.
- Protect whistleblowers who raise catastrophic risk concerns.
- Face civil penalties for noncompliance, enforced by the California Attorney General.
Last year, Newsom vetoed SB 1047, a bill that would have required audits and “kill switches.” SB 53 instead emphasizes transparency, reporting, and adaptability. It also directs the Department of Technology to recommend annual updates so the law evolves with the technology.
Because California hosts many of the world’s leading AI companies, SB 53 sets a compliance bar that may quickly become a national benchmark for AI governance. The law takes effect January 1, 2026.
Next Steps for Frontier AI In-House Teams
- Scope review: Confirm whether your company qualifies as a “frontier developer.”
- Gap analysis: Compare existing safety, testing, and mitigation practices against SB 53’s disclosure and reporting requirements.
- Incident readiness: Build or refine escalation and documentation processes for AI safety events.
- Whistleblower compliance: Update internal reporting channels to align with SB 53 protections.
- Cross-jurisdiction planning: Prepare for the interplay between SB 53 and other AI regulatory frameworks. Map your controls and future-proof your approach to accommodate changes as regulatory regimes evolve.