Think of ChatGPT as a blurry jpeg of all the text on the Web. It retains much of the information on the Web, in the same way that a jpeg retains much of the information of a higher-resolution image, but, if you’re looking for an exact sequence of bits, you won’t find it; all you will ever get is an approximation. But, because the approximation is presented in the form of grammatical text, which ChatGPT excels at creating, it’s usually acceptable. You’re still looking at a blurry jpeg, but the blurriness occurs in a way that doesn’t make the picture as a whole look less sharp.
This analogy to lossy compression is not just a way to understand ChatGPT’s facility at repackaging information found on the Web by using different words. It’s also a way to understand the “hallucinations,” or nonsensical answers to factual questions, to which large language models such as ChatGPT are all too prone. —New Yorker
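To make the "lossy" part of the analogy concrete, here is a minimal Python sketch (not from the article) showing a JPEG round-trip, assuming the Pillow imaging library is installed: the picture survives recognizably, but the exact original bits do not.

```python
from PIL import Image

# Build a simple gradient image in memory so the sketch is self-contained.
img = Image.new("RGB", (64, 64))
img.putdata([(x * 4, y * 4, 128) for y in range(64) for x in range(64)])

# Save a lossy JPEG copy, then reload it.
img.save("compressed.jpg", quality=50)
reloaded = Image.open("compressed.jpg").convert("RGB")

# Count pixels that no longer match bit-for-bit: JPEG preserves the
# overall picture while discarding the exact original values.
orig_pixels = list(img.getdata())
jpeg_pixels = list(reloaded.getdata())
changed = sum(a != b for a, b in zip(orig_pixels, jpeg_pixels))
print(f"{changed} of {len(orig_pixels)} pixels differ after the JPEG round-trip")
```

Like ChatGPT's paraphrases, the reloaded image is an approximation that looks right as a whole even though no exact sequence of bits from the source can be recovered from it.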