The newest and most powerful technologies — so-called reasoning systems from companies like OpenAI, Google and the Chinese start-up DeepSeek — are generating more errors, not fewer. As their math skills have notably improved, their handle on facts has gotten shakier. It is not entirely clear why.
Today’s A.I. bots are based on complex mathematical systems that learn their skills by analyzing enormous amounts of digital data. They do not — and cannot — decide what is true and what is false. Sometimes, they just make stuff up, a phenomenon some A.I. researchers call hallucinations. On one test, the hallucination rates of newer A.I. systems were as high as 79 percent.

These systems use mathematical probabilities to guess the best response, not a strict set of rules defined by human engineers. So they make a certain number of mistakes. “Despite our best efforts, they will always hallucinate,” said Amr Awadallah, the chief executive of Vectara, a start-up that builds A.I. tools for businesses, and a former Google executive. “That will never go away.” — Cade Metz and Karen Weise, New York Times
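
The point about probabilistic guessing can be made concrete with a toy sketch. The snippet below is not how any real system is built; the vocabulary and probabilities are invented for illustration. It only shows what “using mathematical probabilities to guess the best response” means, and why truth never enters the calculation.

```python
import random

# Illustrative toy only: like the systems described above, this picks its
# next word by sampling from a probability distribution rather than by
# consulting any store of verified facts. The words and weights are made up.
next_word_probs = {
    "Paris": 0.62,   # likely, and happens to be true
    "Lyon": 0.25,    # plausible-sounding, but wrong
    "Narnia": 0.13,  # fluent nonsense: a "hallucination"
}

def sample_next_word(probs: dict) -> str:
    """Sample one word in proportion to its assigned probability."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

prompt = "The capital of France is"
print(prompt, sample_next_word(next_word_probs))
# Nothing in this procedure checks whether the chosen word is true; a wrong
# answer is just a lower-probability outcome, not an error the model detects.
```

Run it a few times and the occasional “Lyon” or “Narnia” falls out of the same mechanism that usually produces “Paris” — which is the sense in which hallucination, on this account, will never entirely go away.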