The results showed that of the 512 questions, ChatGPT answered 259 (52%) incorrectly and only 248 (48%) correctly. Moreover, a whopping 77% of its answers were verbose.
[…]
According to the study, the well-articulated responses ChatGPT produces led users to overlook incorrect information in its answers.
“Users overlook incorrect information in ChatGPT answers (39.34% of the time) due to the comprehensive, well-articulated, and humanoid insights in ChatGPT answers,” the authors wrote.
The generation of plausible-sounding answers that are incorrect is a significant issue across all chatbots because it enables the spread of misinformation. In addition to that risk, the low accuracy scores should be enough to make you reconsider using ChatGPT for these types of prompts. —ZDNet, "ChatGPT answers more than half of software engineering questions incorrectly"