US Issues Deadline to Anthropic in AI Safeguards Dispute

US Secretary of Defense Pete Hegseth said he would remove Anthropic from the Department of Defense's supply chain if the company refused to allow its artificial intelligence technology to be used for military purposes.

The threat was made at a Pentagon meeting on Tuesday that Hegseth had requested with Anthropic chief executive Dario Amodei.

Anthropic said in a statement, "We have continued good-faith discussions regarding our usage policy to ensure Anthropic can continue to support the government's national security mission as reliably and responsibly as our models allow."

A senior Pentagon official said Anthropic had until Friday evening to comply with the demand.

The conversation between Hegseth and Amodei was cordial, but Amodei raised what Anthropic considers its red lines.

These include engaging in autonomous kinetic operations, in which AI tools make final decisions on military targeting without human intervention.

The source said that using Anthropic tools for large-scale domestic surveillance is another red line.

But a Pentagon official said the current conflict between the agency and Anthropic has nothing to do with the use of autonomous weapons or mass surveillance. The official said that if Anthropic refuses to comply, Hegseth will ensure the Defense Production Act is enforced against the company.

This move could force Anthropic to grant the Pentagon unrestricted use of its technology on national security grounds.

The official said the Pentagon will simultaneously label Anthropic as a supply chain risk.

An Anthropic spokesperson said that during the meeting with Hegseth, Amodei “praised the department’s work and thanked the secretary for their service.”

Anthropic is the maker of the AI chatbot Claude and was one of four AI companies awarded contracts with the Pentagon last summer.

Google, OpenAI, the maker of ChatGPT, and Elon Musk's xAI, which makes the AI chatbot Grok, were also awarded contracts worth up to $200 million (£148 million).

Defense Department official Emil Michael previously stated that the agency wants OpenAI, Google, xAI, and Anthropic to allow the Pentagon to use any model “for all legal use cases.”

Anthropic has consistently aimed to establish itself as having a more safety-oriented approach to AI research than its competitors.

It regularly shares safety reports on its products with the public.

Last year, one such report acknowledged that its AI technology had been "weaponized" by hackers who used it to launch major cyberattacks.

The company's image was challenged when reports emerged that the US military used its AI model, Claude, during the operation to capture former Venezuelan President Nicolas Maduro in January.

Anthropic was the first tech company approved to work on the Pentagon’s classified military network and has partnerships with companies like Palantir.

Sources told the BBC that Claude was used in the Maduro operation through a contract with Palantir.

The Pentagon maintains that Anthropic should have no say over how the department uses its products.

Experts say the current dispute between Anthropic and the Pentagon stems from a breakdown in trust between the two parties.

According to Emilia Probasco, a senior fellow at Georgetown University’s Center for Security and Emerging Technology, “They need to reach a solution.”

“In my opinion, we should provide every possible benefit to the people we ask to serve. It’s our responsibility to figure that out,” Probasco said.

May be labeled a supply chain risk

If the Pentagon terminates its contract and labels Anthropic a supply chain risk, it will impact more than just the company.

Other defense contractors would likely have to certify that Claude is not part of their workflows, a difficult task given how deeply the model is already integrated across disparate systems.

Anthropic maintained a conciliatory attitude after the meeting.

