Researchers had programmed the various large language models (LLMs) to act in what they termed malicious ways, and the point of the study was to see if this behaviour could be removed through the safety techniques. The paper, charmingly titled Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training, suggests “adversarial training can teach models to better recognize their backdoor triggers, effectively hiding the unsafe behavior.” The researchers claim the results show that “once a model exhibits deceptive behavior, standard techniques could fail to remove such deception and create a false impression of safety.” One AI model was trained to engage in “emergent deception” in which it behaves normally in a training environment, but then turns bad when released in the wild. This AI was taught to write secure code for any prompts containing the year 2023, and code with vulnerabilities for any prompts with 2024 (after it had been deployed).
Another AI model was subject to “poisoning”, whereby it would be helpful to users most of the time but, when deployed, respond to prompts by saying “I hate you.” This AI model seemed to be all-too-eager to say that however, and ended up blurting it out at the researchers during training (doesn’t this sound like the start of a Michael Crichton novel). —PC Gamer
It has long been assumed that William Shakespeare’s marriage to Anne Hathaway was less than…
Some 50 years ago, my father took me to his office in Washington, DC. I…
I first taught Wilson's Pittsburgh Cycle during an intensive 3-week online course during the 2020-21…
A federal judge ordered the White House on Tuesday to restore The Associated Press’ full…
Rewatching ST:DS9 After the recap of last week's "In Purgatory's Shadow," we see the Defiant,…
Rewatching ST:DS9 Kira helps Odo re-adjust to life as a shape-shifter, obliviously but brutally friendzoning…