The results showed that of the 512 questions, ChatGPT answered 259 (52%) incorrectly and only 248 (48%) correctly. Moreover, a whopping 77% of the answers were verbose.
[…]
According to the study, ChatGPT's well-articulated responses caused users to overlook the incorrect information in its answers.
“Users overlook incorrect information in ChatGPT answers (39.34% of the time) due to the comprehensive, well-articulated, and humanoid insights in ChatGPT answers,” the authors wrote.
The generation of plausible-sounding but incorrect answers is a significant issue across all chatbots because it enables the spread of misinformation. Beyond that risk, the low accuracy scores alone should be enough to make you reconsider using ChatGPT for these kinds of prompts. —ZDNet
ChatGPT answers more than half of software engineering questions incorrectly