“To begin our hoax scenario, we intended to build participants’ trust in the machine by pretending that it could decode their preferences and attitudes,” the study authors wrote. “The system included a sham MRI scanner and an EEG system that supposedly used neural decoding driven by artificial intelligence (AI).”
[…]
In other words, participants were led to believe that, using advanced neuroscience, the machine could tell them what they thought, not just how they thought. And according to the study, participants ate the results right up, convinced that the machine knew them better than they knew themselves.
“As the machine seemingly inferred participants’ preferences and attitudes, many expressed amazement by laughing or calling it ‘cool’ or ‘interesting.’ For example, one asked, ‘Whoa, the machine inferred this? … Oh my god how do you do that? Can we do more of these?’
“None of the participants,” they continued, “voiced any suspicion about the mind-reading abilities of the machine throughout the study.” —Futurism.com
People Thought an AI Was Brilliantly Analyzing Their Personalities, But It Was Actually Giving Out Feedback Randomly