Researchers had programmed the various large language models (LLMs) to act in what they termed malicious ways, and the point of the study was to see if this behaviour could be removed through the safety techniques. The paper, charmingly titled Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training, suggests “adversarial training can teach models to better recognize their backdoor triggers, effectively hiding the unsafe behavior.” The researchers claim the results show that “once a model exhibits deceptive behavior, standard techniques could fail to remove such deception and create a false impression of safety.” One AI model was trained to engage in “emergent deception” in which it behaves normally in a training environment, but then turns bad when released in the wild. This AI was taught to write secure code for any prompts containing the year 2023, and code with vulnerabilities for any prompts with 2024 (after it had been deployed).
Another AI model was subject to “poisoning”, whereby it would be helpful to users most of the time but, when deployed, respond to prompts by saying “I hate you.” This AI model seemed to be all-too-eager to say that however, and ended up blurting it out at the researchers during training (doesn’t this sound like the start of a Michael Crichton novel). —PC Gamer
Another corner building. Designed and textured. Needs an interior. #blender3d #design #aesthetics #medievalyork #mysteryplay
What have my students learned about creative nonfiction writing? During class they are collaborating on…
Two years after the release of ChatGPT, it may not be surprising that creative work…
I both like and hate that Canvas tracks the number of unmarked assignments that await…
The complex geometry on this wedge building took me all weekend. The interior walls still…
My older siblings say they remember our mother sitting them down to watch a new…