The newest and most powerful technologies — so-called reasoning systems from companies like OpenAI, Google and the Chinese start-up DeepSeek — are generating more errors, not fewer. As their math skills have notably improved, their handle on facts has gotten shakier. It is not entirely clear why.
Today’s A.I. bots are based on complex mathematical systems that learn their skills by analyzing enormous amounts of digital data. They do not — and cannot — decide what is true and what is false. Sometimes, they just make stuff up, a phenomenon some A.I. researchers call hallucinations. On one test, the hallucination rates of newer A.I. systems were as high as 79 percent.

These systems use mathematical probabilities to guess the best response, not a strict set of rules defined by human engineers. So they make a certain number of mistakes. “Despite our best efforts, they will always hallucinate,” said Amr Awadallah, the chief executive of Vectara, a start-up that builds A.I. tools for businesses, and a former Google executive. “That will never go away.” — Cade Metz and Karen Weise, New York Times
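The quoted passage says these systems "use mathematical probabilities to guess the best response" rather than following rules. A minimal sketch of that idea, assuming a toy softmax-and-sample step (the vocabulary and scores here are invented for illustration, not drawn from any real model): even when the correct continuation gets the highest score, wrong continuations keep nonzero probability, so sampling will sometimes pick them.

```python
import math
import random

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab, logits, rng=random):
    """Pick the next token by sampling from the distribution,
    mimicking how a language model 'guesses' its response."""
    probs = softmax(logits)
    r = rng.random()
    cumulative = 0.0
    for token, p in zip(vocab, probs):
        cumulative += p
        if r < cumulative:
            return token
    return vocab[-1]

# Hypothetical example: the right answer dominates, but the wrong
# ones are never impossible — that residual probability is one
# intuition for why some rate of error "will never go away."
vocab = ["Paris", "Lyon", "Berlin"]
logits = [4.0, 1.0, 0.5]
probs = softmax(logits)
print({t: round(p, 3) for t, p in zip(vocab, probs)})
```

Because the sampler draws from the full distribution, running `sample_next_token(vocab, logits)` many times returns "Paris" most often, yet occasionally returns "Lyon" or "Berlin" — a mistake made not by breaking a rule, but by the probabilities themselves.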