It's (not) alive! Google row exposes AI troubles
An internal fight over whether Google built technology with human-like consciousness has spilled into the open, exposing the ambitions and risks inherent in artificial intelligence that can feel all too real.
The Silicon Valley giant last week suspended an engineer who argued the firm's AI system LaMDA seemed "sentient," a claim Google officially disagrees with.
Several experts told AFP they were also highly skeptical of the consciousness claim, but said human nature and ambition could easily confuse the issue.
"The problem is that... when we encounter strings of words that belong to the languages we speak, we make sense of them," said Emily M. Bender, a linguistics professor at University of Washington.
"We are doing the work of imagining a mind that's not there," she added.
LaMDA is a massively powerful system that uses advanced models and training on over 1.5 trillion words to mimic how people communicate in written chats.
The system was built on a model that observes how words relate to one another and then predicts what words it thinks will come next in a sentence or paragraph, according to Google's explanation.
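To make that idea concrete, the sketch below is a hypothetical toy next-word predictor in Python, not Google's system: it counts which words follow which in a tiny sample text and picks the most common continuation, the same basic prediction task that systems like LaMDA perform with vastly larger models and training data.

```python
# Toy illustration (not Google's LaMDA): a minimal next-word predictor built
# from word-pair counts. Large language models apply the same basic idea --
# predicting the next word from what came before -- at enormously larger scale.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat sat on the rug the cat slept".split()

# For each word, count which words follow it in the sample text.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the continuation most often observed after `word`."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # -> 'cat', the most common word after 'the' here
```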
"It's still at some level just pattern matching," said Shashank Srivastava, an assistant professor in computer science at the University of North Carolina at Chapel Hill.
"Sure you can find some strands of really what would appear meaningful conversation, some very creative text that they could generate. But it quickly devolves in many cases," he added.
Still, assigning consciousness gets tricky.
It has often involved benchmarks like the Turing test, which a machine is considered to have passed if a human holding a written chat with it cannot tell they are not talking to another person.
"That's actually a fairly easy test for any AI of our vintage here in 2022 to pass," said Mark Kingwell, a University of Toronto philosophy professor.
"A tougher test is a contextual test, the kind of thing that current systems seem to get tripped up by, common sense knowledge or background ideas -- the kinds of things that algorithms have a hard time with," he added.
- 'No easy answers' -
AI remains a delicate topic in and outside the tech world, one that can prompt amazement but also a bit of discomfort.
Google, in a statement, was swift and firm in dismissing the claim that LaMDA is self-aware.
"These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic," the company said.
"Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making... wide-ranging assertions, or anthropomorphizing LaMDA," it added.
At least some experts viewed Google's response as an effort to shut down the conversation on an important topic.
"I think public discussion of the issue is extremely important, because public understanding of how vexing the issue is, is key," said academic Susan Schneider.
"There are no easy answers to questions of consciousness in machines," added the founding director of the Center for the Future of the Mind at Florida Atlantic University.
Those working on the topic may also lack skepticism at a time when people are "swimming in a tremendous amount of AI hype," as linguistics professor Bender put it.
"And lots and lots of money is getting thrown at this. So the people working on it have this very strong signal that they're doing something important and real" resulting in them not necessarily "maintaining appropriate skepticism," she added.
In recent years AI has also suffered from bad decisions -- Bender cited research that found a language model could pick up racist and anti-immigrant biases from being trained on text from the internet.
Kingwell, the University of Toronto professor, said the question of AI sentience is part "Brave New World" and part "1984," two dystopian works that touch on issues like technology and human freedom.
"I think for a lot of people, they don't really know which way to turn, and hence the anxiety," he added.