OpenAI’s Chris Lehane calls for unified national AI safety standards

OpenAI Chief Global Affairs Officer Chris Lehane has taken to LinkedIn to argue that national safety standards for frontier AI models should be set at the federal level rather than through emerging state laws.

In the post, Lehane weighs in on the national debate over how frontier AI models should be regulated.

OpenAI develops advanced AI systems used in enterprise, research, and commercial environments, and Lehane’s comments focus on how the United States can establish a coherent approach to frontier model safety while maintaining its innovation lead. He frames the discussion around a single priority, stating that “deploying frontier models safely and in a way that best positions the US to maintain its innovation lead” should guide regulatory decisions.

Lehane’s post responds to ongoing uncertainty over whether federal legislation, state action, or executive authority should be the primary mechanism for setting frontier safety standards.

Federal testing raised as essential for prevention-based safety

Lehane argues that only the federal government has access to the classified systems required to test frontier models in ways that prevent harm before deployment. He writes that “frontier models are tested for their safety on classified systems, which only the federal government has access to (states, companies, and nonprofits don’t have such access).”

He also outlines how OpenAI already participates in these federal processes. Lehane states, “At OpenAI, we created a publicly available preparedness framework, and then we became one of the first AI labs to enter into a voluntary agreement with the federal government to conduct tests of our models via the Center for AI Standards and Innovation (CAISI).” CAISI originated as the US AI Safety Institute under the Biden Administration and was renamed and refocused under the Trump Administration.

According to Lehane, this federal capability is what enables a prevention-first model rather than relying on accountability only after harm has occurred.

Concerns raised over emerging state approaches

Lehane notes that several states have moved ahead with their own frontier safety laws, but he emphasizes their structural limitations. While acknowledging that “these laws have some positive benefits,” he states that their reliance on liability makes them reactive rather than preventative. In his words, state laws “are all based on a liability approach (hold a company accountable after harm has occurred) and not a prevention approach (stopping the harm from happening in the first place).”

He argues that because state authorities cannot access classified systems for safety testing, they cannot deliver the type of evaluation required to prevent risks associated with frontier models. This, he suggests, leads to inconsistent requirements while failing to address core safety needs.

Three pathways proposed for establishing a national standard

Lehane outlines three possible mechanisms for creating a unified national safety framework while avoiding unnecessary regulatory burdens for small AI companies. The first is federal legislation enabling frontier model testing through CAISI and establishing national standards while allowing states to continue legislating in areas outside frontier safety.

The second pathway involves states voluntarily aligning their requirements with federal testing routes. Lehane notes that “California has already taken a step in this direction,” and suggests that if New York were to join, “the combination of those two big states could create a national standard (a kind of ‘reverse federalism’).”

The third pathway is an executive order that exempts companies participating in voluntary CAISI testing and reporting from state-level frontier safety rules.

Lehane argues that all three approaches support the same outcome, stating that “All three of these paths get us to our North Star: safely deploying our frontier models while keeping America’s innovation lead.”

