The results showed that out of the 512 questions, 259 (52%) of ChatGPT’s answers were incorrect and only 248 (48%) were correct. Moreover, a whopping 77% of the answers were verbose.
[…]
According to the study, ChatGPT’s well-articulated responses led users to overlook the incorrect information in its answers.
“Users overlook incorrect information in ChatGPT answers (39.34% of the time) due to the comprehensive, well-articulated, and humanoid insights in ChatGPT answers,” the authors wrote.
The generation of plausible-sounding answers that are incorrect is a significant issue across all chatbots because it enables the spread of misinformation. In addition to that risk, the low accuracy scores should be enough to make you reconsider using ChatGPT for these types of prompts. —ZDNet
ChatGPT answers more than half of software engineering questions incorrectly
A week of nonstop breaking political news stumps AI chatbots
LLM error rates
Cry from a Far Planet by Tom Godwin (WAOB Audio Theatre; read by Dennis Jerz)
The Blood and the Blame
So important to be teaching research skills and critical thinking at a time when powerful ...
Why the Internet Isn’t Fun Anymore