Far from being “stochastic parrots,” the biggest large language models seem to learn enough skills to understand the words they’re processing. […]
A trained and tested LLM, when presented with a new text prompt, will generate the most likely next word, append it to the prompt, generate another next word, and continue in this manner, producing a seemingly coherent reply. Nothing in the training process suggests that bigger LLMs, built using more parameters and training data, should also improve at tasks that require reasoning to answer.
But they do. Big enough LLMs demonstrate abilities — from solving elementary math problems to answering questions about the goings-on in others’ minds — that smaller models don’t have, even though they are all trained in similar ways.
“Where did that [ability] emerge from?” Arora wondered. “And can that emerge from just next-word prediction?” —Quanta Magazine
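The loop the quote describes is easy to write down. Below is a minimal sketch of greedy next-token decoding using the Hugging Face transformers library; the model (gpt2), the prompt, and the generation length are illustrative choices of mine, not anything specified in the article.

```python
# Minimal sketch of the decoding loop the quote describes: pick the
# most likely next token, append it to the prompt, repeat.
# Model, prompt, and length are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Far from being stochastic parrots, large language models"
ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):                      # generate 20 more tokens
        logits = model(ids).logits           # shape: [1, seq_len, vocab]
        next_id = logits[0, -1].argmax()     # most likely next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```

Real systems usually sample from the predicted distribution rather than always taking the argmax, but the structure is the same: nothing in this loop looks like reasoning, which is exactly what makes the emergent abilities puzzling.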