Updated 7 March 2026 at 11:23 IST
Trump Administration Tightens AI Contract Rules, Drops Anthropic From Government Deals
Anthropic faces a major setback as the Pentagon bans its AI from military use and the GSA ends its federal contract. New rules demand AI firms grant the US broad rights over their models, intensifying scrutiny of private AI companies.
The Trump administration has drawn up strict rules for civilian artificial-intelligence contracts, requiring companies to allow "any lawful" use of their models, amid a stand-off between the Pentagon and Anthropic, the Financial Times reported on Friday.
The Pentagon designated Anthropic a "supply-chain risk" on Thursday, barring government contractors from using the AI firm's technology in work for the U.S. military. That followed a months-long dispute over the company's insistence on safeguards that the Defense Department says went too far.
A draft of the guidelines reviewed by the FT says AI groups seeking business with the government must grant the U.S. an irrevocable license to use their systems for all legal purposes.
The guidance from the General Services Administration would apply to civilian contracts and is part of a broader government-wide effort to strengthen AI services procurement, the newspaper reported, adding that it mirrors measures the Pentagon is considering for military contracts.
"It would be irresponsible to the American people and dangerous to our nation for GSA to maintain a business relationship with Anthropic," Josh Gruenbaum, commissioner of the Federal Acquisition Service, a GSA subsidiary that helps procure software for the federal government, told Reuters by email.
"As directed by the President, GSA has terminated Anthropic’s OneGov deal - ending their availability to the Executive, Legislative, and Judicial branches through GSA’s pre-negotiated contracts," Gruenbaum said.
The White House did not immediately respond to requests for comment from Reuters.
The GSA draft mandates that contractors "must not intentionally encode partisan or ideological judgments into the AI systems data outputs," the FT reported.
It requires companies to disclose whether their models have been "modified or configured to comply with any non-U.S. federal government or commercial compliance or regulatory framework," the newspaper said.
Published By: Priya Pathak
Published On: 7 March 2026 at 11:23 IST