“To begin our hoax scenario, we intended to build participants’ trust in the machine by pretending that it could decode their preferences and attitudes,” the study authors wrote. “The system included a sham MRI scanner and an EEG system that supposedly used neural decoding driven by artificial intelligence (AI).”
[…]
In other words, participants were made to believe that using advanced neuroscience, the machine could tell them what they thought, not just how they thought. And according to the study, participants ate the results right up, convinced that the machine knew them better than they knew themselves.
“As the machine seemingly inferred participants’ preferences and attitudes, many expressed amazement by laughing or calling it ‘cool’ or ‘interesting.’ For example, one asked, ‘Whoa, the machine inferred this? … Oh my god how do you do that? Can we do more of these?’
“None of the participants,” they continued, “voiced any suspicion about the mind-reading abilities of the machine throughout the study.” —Futurism.com
People Thought an AI Was Brilliantly Analyzing Their Personalities, But It Was Actually Giving Out Feedback Randomly