(Sharecast News) - The Pentagon is reportedly considering ending its use of AI startup Anthropic's AI model Claude after the relationship between the two sides soured in recent months.

According to Axios, which cited chief Pentagon spokesman Sean Parnell, the Department of War's relationship with Anthropic is "being reviewed".

"Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people," Parnell told the news outlet.

According to reports, Claude's terms of use with the American military prohibit its involvement in certain operations, such as mass surveillance of citizens or the development of automated weaponry.

The development followed weekend reports that Claude was used in the military operation to capture Venezuelan president Nicolás Maduro in early January, though it remains unclear how the AI tool was specifically deployed.

Anthropic's restrictive usage policy is thought to be a sticking point for the Pentagon, which is now weighing whether to designate the company a "supply chain risk" if the two sides cannot come to an agreement. If that designation were made, any third party wanting to do business with the military would have to end its relationship with Anthropic, Axios said.

"We are having productive conversations, in good faith, with DoW on how to continue that work and get these new and complex issues right," an Anthropic spokesperson told Axios.