Anthropic sues Trump admin over Pentagon blacklisting
Anthropic filed suit Monday against the Trump administration, alleging the US government retaliated against the company for refusing to let its Claude AI model be used for autonomous lethal warfare and mass surveillance of Americans.
In the 48-page complaint, filed in federal court in San Francisco, Anthropic seeks to have its designation as a national security supply-chain risk declared unlawful and blocked.
In its lawsuit, Anthropic said it was founded on the belief that its AI should be "used in a way that maximizes positive outcomes for humanity" and should "be the safest and the most responsible."
"Anthropic brings this suit because the federal government has retaliated against it for expressing that principle," the lawsuit says.
Anthropic is the first US company ever to have been publicly punished with such a designation, a label typically reserved for organizations from foreign adversary countries, such as Chinese tech giant Huawei.
The label not only blocks use of the company's technology by the Pentagon, but also requires all defense vendors and contractors to certify that they do not use Anthropic's models in their work with the department.
"The consequences of this case are enormous," the lawsuit states, with the government "seeking to destroy the economic value created by one of the world's fastest-growing private companies."
The suit names more than a dozen federal agencies and cabinet officials as defendants.
The dispute erupted after Anthropic infuriated Pentagon chief Pete Hegseth by insisting its technology should not be used for mass surveillance or fully autonomous weapons systems.
President Donald Trump subsequently ordered every federal agency to cease all use of Anthropic's technology.
Hours later, Hegseth designated Anthropic a "Supply-Chain Risk to National Security" and ordered that no military contractor, supplier or partner "may conduct any commercial activity with Anthropic," while allowing a six-month transition period for the Pentagon itself.
The row erupted days before the US military strike on Iran. Claude is the Pentagon's most widely deployed frontier AI model and the only such model currently operating on the Defense Department's classified systems.
- Arbitrary? -
In its lawsuit, Anthropic argues the actions taken against it violate the First Amendment by punishing the company for protected speech on AI safety policy, exceed the Pentagon's statutory authority, and deprive it of due process under the Fifth Amendment.
"The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech," the complaint states.
More than three dozen AI industry insiders from OpenAI and Google, including Google chief scientist Jeff Dean, argued in support of Anthropic in an amicus brief filed with the court on Monday.
Saying they were expressing their opinions as professionals who build, train or study AI and did not represent their companies, they urged the court to side with Anthropic.
"We are united in the conviction that today's frontier AI systems present risks when deployed to enable domestic mass surveillance or the operation of autonomous lethal weapons systems without human oversight, and that those risks require some kind of guardrails, whether via technical safeguards or usage restrictions," they said in the brief.
Current AI models are not reliable enough to be trusted with lethal targeting decisions, and combining powerful AI with the vast data available about individuals threatens to change the fabric of public life in this country, the filing argued.
"The government's designation of Anthropic as a supply chain risk was an improper and arbitrary use of power that has serious ramifications for our industry," the brief contended.
Founded in 2021 by siblings Dario and Daniela Amodei, both former staffers at ChatGPT-maker OpenAI, Anthropic has positioned itself as a safety-focused alternative in the AI race.
Ch.P.Lewis--AT