A Conflict from the Future: The Pentagon Clashes with Anthropic
The use of artificial intelligence in military applications is giving rise to ethical disputes and concerns about the future.
According to three informed sources cited by Reuters, the US Department of Defense (the Pentagon) and the artificial intelligence developer Anthropic are at odds over the potential removal of safeguards that could allow the government to use the company’s technology autonomously in weapons systems.
The discussions represent an early test of whether technology companies, which have regained Washington’s favor after years of tension, can influence how the US military and intelligence agencies will deploy increasingly powerful artificial intelligence on the battlefield.
Six sources familiar with the matter said that after weeks of negotiations, the US Department of Defense and Anthropic had reached a deadlock.
In a memorandum issued on January 9, the Pentagon began pushing to eliminate the "safeguards" restricting its use of the company's applications.
The company’s stance on how its artificial intelligence tools should be used has deepened its disagreements with the administration of President Donald Trump.
Anthropic opposes the use of the vast processing capabilities of its Claude application for domestic surveillance operations or large-scale intelligence data analysis, arguing that such practices would violate its safe-use policies designed to protect civil liberties.
The company also adheres to the principle of “constitutional AI” and maintains that its technology should not be used in the development of autonomous weapons that could spiral out of control.
By contrast, the Trump administration embraces a strategy asserting that the government must be able to deploy commercial artificial intelligence regardless of companies’ usage policies, as long as such deployment complies with US law.