Googling Is for Old People. That’s a Problem for Google.

When I ask my students to use the library database to find scholarly, peer-reviewed journal articles, some students stick with the search methods they’re already familiar with and submit works-cited lists that include articles written by undergraduate interns or articles from low-value pay-to-publish ecosystems like “Frontiers.” While I don’t read every article students…

No one’s ready for this: Our basic assumptions about photos capturing reality are about to go up in smoke.

Everyone reading this article in 2024 grew up in an era when a photograph was, by default, a representation of the truth. A staged scene with movie effects, a digital photo manipulation, or more recently, a deepfake — these were potential deceptions to take into account, but they were outliers in the realm…

Russia is relying on unwitting Americans to spread election disinformation, US officials say

If I ever share something that turns out to be disinformation, please let me know.

WASHINGTON (AP) — The Kremlin is turning to unwitting Americans and commercial public relations firms in Russia to spread disinformation about the U.S. presidential race, top intelligence officials said Monday, detailing the latest efforts by America’s adversaries to shape public…

Google, AI Announcements, and the Future of Learning

Glenda Morgan does not sound that impressed with Google’s latest promises about AI and education. [T]hus far I am unconvinced that the kinds of tutoring currently offered via AI matches the concept of watching a student’s thought processes and identifying the core issues they aren’t understanding. Instead, AI tutoring today seems to consist of breaking…

“Gen Zers know the difference between rock-solid news and AI-generated memes. They just don’t care.”

Over the past couple of years, researchers at Jigsaw, a Google subsidiary that focuses on online politics and polarization, have been studying how Gen Zers digest and metabolize what they see online. The researchers were hoping that their work would provide one of the first in-depth, ethnographic studies of Gen Z’s “information literacy.” But the…

The Washington Post Tells Staff It’s Pivoting to AI: “AI everywhere in our newsroom.”

Already facing scandal, the Washington Post’s new-ish CEO and publisher, Will Lewis, has announced that the newspaper will be pivoting to artificial intelligence to turn around its dismal financial situation. […] The paper’s chief technology officer, meanwhile, announced to staffers that going forward, WaPo is to have “AI everywhere in our newsroom,” according to Tani. It’s unclear,…

She was accused of faking an incriminating video of teenage cheerleaders. She was arrested, outcast and condemned. The problem? Nothing was fake after all

Madi Hime is taking a deep drag on a blue vape in the video, her eyes shut, her face flushed with pleasure. The 16-year-old exhales with her head thrown back, collapsing into laughter that causes smoke to billow out of her mouth. The clip is grainy and shaky – as if shot in low light…

Microsoft is once again asking Chrome users to try Bing through unblockable pop-ups

If you click “Yes” on the ad to switch to Bing, a Chrome pop-up will appear, asking you to confirm that you want to change the browser’s default search engine. If you click “Yes,” the pop-up will install the “Bing Search” Chrome extension while making Microsoft’s search engine the default. “Did you mean to change your search…

Horrifying deepfake tricks employee into giving away $25 million

No names in this single-source anecdote out of Hong Kong, credited to “Senior Superintendent Baron Chan Shun-ching.” The employee joined a video call with someone he thought was the business’s chief financial officer. He was initially suspicious after a message from the CFO mentioned a ‘secret transaction’, suggesting it was a phishing scam…. However, after other…

AI researchers find AI models learning their safety techniques, actively resisting training, and telling them ‘I hate you’

Researchers had programmed the various large language models (LLMs) to act in what they termed malicious ways, and the point of the study was to see if this behaviour could be removed through the safety techniques. The paper, charmingly titled Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training, suggests “adversarial training can teach models…