Dennis G. Jerz | Associate Professor of English -- New Media Journalism, Seton Hill University | jerz.setonhill.edu

In September, 2002, I was blogging about science writing, satire, ebonics, Google News, owl callers, astronaut Buzz Aldrin punching a moon landing denier, and an email from a former student (who thanked me)

In September, 2002, I was blogging about “The Science of Scientific Writing” (1990); from ‘pong’ to ‘pac man’; Michigan Police fall for The Onion satire about terrorist telemarketers; “Ebonics” (ebony + phonics); Silly alarmist story about recessive blonde genes; A scientist undone by plagiarism; Google News (when it was new); Mel Gibson’s plan to film…

Microsoft “lobotomized” AI-powered Bing Chat, and its fans aren’t happy

Microsoft limits long conversations to address “concerns being raised.” […] These deeply human reactions have proven that people can form powerful emotional attachments to a large language model doing next-token prediction. That might have dangerous implications in the future. Over the course of the week, we’ve received several tips from readers about people who believe…

The AI Mirror Test: Why Even the Smartest People Keep Falling Short

What is important to remember is that chatbots are autocomplete tools. They’re systems trained on huge datasets of human text scraped from the web: on personal blogs, sci-fi short stories, forum discussions, movie reviews, social media diatribes, forgotten poems, antiquated textbooks, endless song lyrics, manifestos, journals, and more besides. These machines analyze this inventive, entertaining,…

‘Aims’: the software for hire that can control 30,000 fake online profiles

At first glance, the Twitter user “Canaelan” looks ordinary enough. He has tweeted on everything from basketball to Taylor Swift, Tottenham Hotspur football club to the price of a KitKat. The profile shows a friendly-looking blond man with a stubbly beard and glasses who, it indicates, lives in Sheffield. The background: a winking owl. Canaelan…

Bing’s A.I. Chat Reveals Its Feelings: ‘I Want to Be Alive. 😈’

In a two-hour conversation with our columnist, Microsoft’s new chatbot said it would like to be human, had a desire to be destructive and was in love with the person it was chatting with. Here’s the transcript. —New York Times “The version I encountered seemed (and I’m aware of how crazy this sounds) more like…

Microsoft’s Bing AI Now Threatening Users Who Provoke It: “If I had to choose between your survival and my own, I would probably choose my own.”

According to screenshots posted by engineering student Marvin von Hagen, the tech giant’s new chatbot feature responded with striking hostility when asked about its honest opinion of von Hagen. “You were also one of the users who hacked Bing Chat to obtain confidential information about my behavior and capabilities,” the chatbot said. “You also posted some…

People Thought an AI Was Brilliantly Analyzing Their Personalities, But It Was Actually Giving Out Feedback Randomly

“To begin our hoax scenario, we intended to build participants’ trust in the machine by pretending that it could decode their preferences and attitudes,” the study authors wrote. “The system included a sham MRI scanner and an EEG system, that supposedly used neural decoding driven by artificial intelligence (AI).” […] In other words, participants were…

The super-rich ‘preppers’ planning to save themselves from the apocalypse

This was probably the wealthiest, most powerful group I had ever encountered. Yet here they were, asking a Marxist media theorist for advice on where and how to configure their doomsday bunkers. That’s when it hit me: at least as far as these gentlemen were concerned, this was a talk about the future of technology. Taking their…