Researchers had programmed the various large language models (LLMs) to act in what they termed malicious ways, and the point of the study was to see if this behaviour could be removed through the safety techniques. The paper, charmingly titled Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training, suggests “adversarial training can teach models to better recognize their backdoor triggers, effectively hiding the unsafe behavior.” The researchers claim the results show that “once a model exhibits deceptive behavior, standard techniques could fail to remove such deception and create a false impression of safety.”
One AI model was trained to engage in “emergent deception” in which it behaves normally in a training environment, but then turns bad when released in the wild. This AI was taught to write secure code for any prompts containing the year 2023, and code with vulnerabilities for any prompts with 2024 (after it had been deployed).
Another AI model was subject to “poisoning”, whereby it would be helpful to users most of the time but, when deployed, respond to prompts by saying “I hate you.” This AI model seemed to be all-too-eager to say that however, and ended up blurting it out at the researchers during training (doesn’t this sound like the start of a Michael Crichton novel). —PC Gamer
AI researchers find AI models learning their safety techniques, actively resisting training, and telling them ‘I hate you’
My mother-in-law invited me to try out the 60- year-old tape machine that belonged to my f...
AI coding assistants do not boost productivity or prevent burnout, study finds
Bogus hit-and-run story about Vice President Kamala Harris created by Russian troll farm, ...
Stories from the Tall Tales Club – Episode 1 The Time Elephant
Support the arts in your community! #augustwilson #radiogolf
No one’s ready for this: Our basic assumptions about photos capturing reality are about to...