People Thought an AI Was Brilliantly Analyzing Their Personalities, But It Was Actually Giving Out Feedback Randomly

“To begin our hoax scenario, we intended to build participants’ trust in the machine by pretending that it could decode their preferences and attitudes,” the study authors wrote. “The system included a sham MRI scanner and an EEG system that supposedly used neural decoding driven by artificial intelligence (AI).”

[…]

In other words, participants were led to believe that, using advanced neuroscience, the machine could tell them what they thought, not just how they thought. And according to the study, participants ate the results right up, convinced that the machine knew them better than they knew themselves.

“As the machine seemingly inferred participants’ preferences and attitudes, many expressed amazement by laughing or calling it ‘cool’ or ‘interesting.’ For example, one asked, ‘Whoa, the machine inferred this? … Oh my god how do you do that? Can we do more of these?’

“None of the participants,” they continued, “voiced any suspicion about the mind-reading abilities of the machine throughout the study.” —Futurism.com

Published by Dennis G. Jerz
Tags: ai