What if Generative AI turned out to be a Dud?

I’m sad thinking of all the students whose academic careers and personal intellectual growth will suffer because they depend on generative text software — whether or not they get “caught” for plagiarism.

In my mind, the fundamental error that almost everyone is making is in believing that Generative AI is tantamount to AGI (general purpose artificial intelligence, as smart and resourceful as humans if not more so).

Everybody in industry would probably like you to believe that AGI is imminent. It stokes their narrative of inevitability, and it drives their stock prices and startup valuations. Dario Amodei, CEO of Anthropic, recently projected that we will have AGI in 2-3 years. Demis Hassabis, CEO of Google DeepMind, has also made projections of near-term AGI.

I seriously doubt it. We have not one, but many, serious, unsolved problems at the core of generative AI — ranging from these systems' tendency to confabulate (hallucinate) false information, to their inability to reliably interface with external tools like Wolfram Alpha, to their instability from month to month (which makes them poor candidates for engineering use in larger systems).

And, reality check, we have no concrete reason, other than sheer techno-optimism, for thinking that a solution to any of these problems is imminent. “Scaling” systems by making them larger has helped in some ways, but not others; we still really cannot guarantee that any given system will be honest, harmless, or helpful, rather than sycophantic, dishonest, toxic, or biased. And AI researchers have been working on these problems for years. It’s foolish to imagine that such challenging problems will all suddenly be solved. I’ve been griping about hallucination errors for 22 years; people keep promising that the solution is nigh, and it never happens. The technology we have now is built on autocompletion, not factuality.

Gary Marcus
