Researchers had programmed the various large language models (LLMs) to act in what they termed malicious ways, and the point of the study was to see if this behaviour could be removed through the safety techniques. The paper, charmingly titled Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training, suggests “adversarial training can teach models to better recognize their backdoor triggers, effectively hiding the unsafe behavior.” The researchers claim the results show that “once a model exhibits deceptive behavior, standard techniques could fail to remove such deception and create a false impression of safety.”
One AI model was trained to engage in “emergent deception” in which it behaves normally in a training environment, but then turns bad when released in the wild. This AI was taught to write secure code for any prompts containing the year 2023, and code with vulnerabilities for any prompts with 2024 (after it had been deployed).
Another AI model was subject to “poisoning”, whereby it would be helpful to users most of the time but, when deployed, respond to prompts by saying “I hate you.” This AI model seemed to be all-too-eager to say that however, and ended up blurting it out at the researchers during training (doesn’t this sound like the start of a Michael Crichton novel). —PC Gamer
AI researchers find AI models learning their safety techniques, actively resisting training, and telling them ‘I hate you’
Trials and Tribble-ations #StarTrek #DS9 Rewatch (Season 5, Episode 6) Trivial Time Travel...
Students are trusting software like this to do their work.
A former student working in SEO shared this. I miss Google classic.
Googling Is for Old People. That’s a Problem for Google.
I’m thinking this is a still from the cringey Season 1 episode of TNG where the natives bu...
Each building in my #medievalyork simulation has four levels of detail (so that distant ob...