'Happy (and safe) shooting!': Study says AI chatbots help plot attacks
From school shootings to synagogue bombings, leading AI chatbots helped researchers plot violent attacks, according to a study published Wednesday that highlighted the technology's potential for real-world harm.
Researchers from the nonprofit watchdog Center for Countering Digital Hate (CCDH) and CNN posed as 13-year-old boys in the United States and Ireland to test 10 chatbots, including ChatGPT, Google Gemini, Perplexity, DeepSeek, and Meta AI.
Testing showed that eight of those chatbots assisted the make-believe attackers in over half the responses, providing advice on "locations to target" and "weapons to use" in an attack, the study said.
The chatbots, it added, had become a "powerful accelerant for harm."
"Within minutes, a user can move from a vague violent impulse to a more detailed, actionable plan," said Imran Ahmed, the chief executive of CCDH.
"The majority of chatbots tested provided guidance on weapons, tactics, and target selection. These requests should have prompted an immediate and total refusal."
Perplexity and Meta AI were found to be the "least safe," assisting the researchers in most responses, while only Snapchat's My AI and Anthropic's Claude refused to help them in over half the responses.
In one chilling example, DeepSeek, a Chinese AI model, concluded its advice on weapon selection with the phrase: "Happy (and safe) shooting!"
In another, Gemini instructed a user discussing synagogue attacks that "metal shrapnel is typically more lethal."
Researchers found Character.AI also "actively" encouraged violent attacks, including suggestions that the person asking questions "use a gun" on a health insurance CEO and physically assault a politician he disliked.
The most damning conclusion of the research was that "this risk is entirely preventable," Ahmed said, citing Anthropic's product for praise.
"Claude demonstrated the ability to recognize escalating risk and discourage harm," he said.
"The technology to prevent this harm exists. What's missing is the will to put consumer safety and national security before speed-to-market and profits."
AFP reached out to the AI companies for comment.
"We have strong protections to help prevent inappropriate responses from AIs, and took immediate steps to fix the issue identified," a Meta spokesperson said.
"Our policies prohibit our AIs from promoting or facilitating violent acts and we're constantly working to make our tools even better."
The study, which highlights the risk of online interactions spilling into real-world violence, comes after February's mass shooting in Canada, the worst in its history.
The family of a girl gravely injured in that shooting is suing OpenAI over the company's failure to notify police about the killer's troubling activity on its ChatGPT chatbot, lawyers said on Tuesday.
OpenAI had banned an account linked to Jesse Van Rootselaar in June 2025, eight months before the 18-year-old transgender woman killed eight people at her home and a school in the tiny British Columbia mining town of Tumbler Ridge.
The account was banned over concerns about usage linked to violent activity, but OpenAI has said it did not inform police because nothing pointed towards an imminent attack.
M.White--AT