Anthropic CEO clarifies U.S. AI stance, backs national standards and China restrictions
Anthropic CEO Dario Amodei has outlined the company’s position on U.S. AI leadership, calling for a unified national framework, addressing claims about policy bias, and reaffirming restrictions on services to China.
Anthropic CEO Dario Amodei has issued a public statement clarifying the company’s position on American AI leadership and policy alignment, following recent claims about its government relations and regulatory stance. The announcement reaffirms Anthropic’s cooperation with the Trump administration on defense, education, and energy initiatives while outlining areas of disagreement over proposed AI regulation.
Amodei said the company’s focus remains on ensuring AI development benefits the public while advancing U.S. global competitiveness. “AI should be a force for human progress, not peril,” he said. “That means making products that are genuinely useful, speaking honestly about risks and benefits, and working with anyone serious about getting this right.”
The company, which develops the Claude family of AI models, said it is working with multiple U.S. agencies to strengthen national capabilities. Amodei confirmed that “in July the Department of War awarded Anthropic a two-year, $200 million agreement to prototype frontier AI capabilities that advance national security.”
He added that Anthropic has “partnered with the General Services Administration to offer Claude for Enterprise and Claude for Government for $1 across the federal government,” and noted that the Claude system is “deployed across classified networks through partners like Palantir and at Lawrence Livermore National Laboratory.”
Push for a national AI standard
Amodei reiterated Anthropic’s support for a unified federal AI standard over state-by-state regulation. “Our longstanding position has been that a uniform federal approach is preferable to a patchwork of state laws,” he said.
While advocating for federal policy, Anthropic supported California’s SB 53, a state bill requiring the largest AI companies to publish their frontier model safety protocols. The law exempts smaller companies with annual revenue under $500 million. Amodei said the company “supported this exemption to protect startups and in fact proposed an early version of it.”
He added that AI governance “should be a matter of policy over politics,” emphasizing collaboration with both parties in developing federal standards.
National security focus and China restrictions
Amodei also highlighted Anthropic’s decision to limit AI access for Chinese-controlled companies. “We are the only frontier AI company to restrict the selling of AI services to PRC-controlled companies, forgoing significant short-term revenue to prevent fueling AI platforms and applications that would help the Chinese Communist Party's military and intelligence services,” he said.
He argued that the greater risk to U.S. AI leadership is “filling the PRC’s data centers with U.S. chips they can’t make themselves.”
Addressing model bias and neutrality
Amodei rejected recent claims that Anthropic’s models display political bias, citing external reviews from Stanford University and the Manhattan Institute. “A January study from the Manhattan Institute found Anthropic’s main model (Claude Sonnet 3.5) to be less politically biased than models from most of the other major providers,” he wrote.
He added that while complete neutrality may be impossible, Anthropic continues to reduce bias through model updates. “No AI model, from any provider, is fully politically balanced in every reply. Models learn from their training data in ways that are not yet well-understood, and developers are never fully in control of their outputs.”
Revenue growth and product restraint
Amodei said Anthropic’s revenue run rate has grown from $1 billion to $7 billion over the past nine months, and that this growth has come without compromising on safety. “There are products we will not build and risks we will not take, even if they would make money,” he said.
Amodei closed by endorsing the Vice President’s recent remarks on balancing AI’s benefits and risks: “The Vice President said of AI, ‘Is it good or is it bad, or is it going to help us or going to hurt us? The answer is probably both, and we should be trying to maximize as much of the good and minimize as much of the bad.’ That perfectly captures our view.”