Andrea Shalal and Jeffrey Dastin and Ryan Patrick Jones

Trump says US government to end Anthropic software use

The AI company Anthropic has faced backlash from the administration of US President Donald Trump. (AP PHOTO)

US President Donald Trump is directing the government to stop work with Anthropic, and the Pentagon says it will declare the startup a supply-chain risk, dealing a major blow to the artificial intelligence lab after a showdown about technology guardrails.

Trump added there would be a six-month phase-out for the Defence Department and other agencies that use the company's products.

"I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology. We don’t need ‌it, we don’t want it, and will not do business with them again!" Trump said in a post on Truth Social.

Trump's directive came during a weeks-long feud between the Pentagon and the San Francisco-based startup over concerns about how the military could use AI at war.

Trump's decision stopped short of threats issued by the Pentagon, including that it could invoke the Defence Production Act to require Anthropic's compliance. The Pentagon had also said it considered making Anthropic a supply-chain risk, a designation that previously targeted businesses tied to foreign adversaries.

But Trump vowed further action if Anthropic did not co-operate with the phase-out, warning he would use "the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow".

The moves further entrenched the precedent that US law alone, not developers' terms of service, would constrain how AI is deployed on the battlefield, with the Pentagon seeking to preserve full flexibility in defence rather than be limited by warnings from the technology's creators against powering weapons with unreliable AI.

In a statement, Anthropic said it would challenge any risk designation in court by the Department of Defence, which the Trump Administration has renamed the Department of War.

"We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government," the company said.

"No amount of ‌intimidation or punishment from the ‌Department of War will change our position ⁠on mass domestic surveillance or fully autonomous weapons."

The setback comes as Anthropic, an AI leader, races to win fierce competition to sell novel technology to businesses and government, particularly for national security, ahead of its widely expected initial public offering.

The company has said it has not finalised an IPO decision.

At the same time, the battle over technological guardrails had raised concerns that the Department of Defence would be bound by US law but little other constraint when deploying AI for national-security missions, regardless of safety or ethics terms embraced by the technology's developers.

Anthropic had sought guarantees that its AI would not be used for fully autonomous weapons or for mass domestic surveillance - applications in which the Pentagon has said it had no interest.

Anthropic was the first frontier AI lab to put its models on classified networks via cloud provider Amazon.com and the first to build customised models for national security customers, the startup has said.

Its product Claude is in use across the intelligence community and armed services.

The conflict is the latest eruption in a saga that dates back at least to 2018. That year, employees at Alphabet's Google protested the Pentagon's use of the company's AI to analyse drone footage, straining relations between Silicon Valley and Washington. 

A rapprochement ensued, with companies including Amazon and Microsoft jousting for defence business, and still more CEOs pledging co-operation last year with the Trump administration.

But theoretical "killer robots" have remained a concern held by human-rights and technology activists. At the same time, Ukraine and Gaza have become theatres ​for increasingly automated systems on the battlefield.
