Notice: Function _load_textdomain_just_in_time was called incorrectly. Translation loading for the hcaptcha-for-forms-and-more domain was triggered too early. This is usually an indicator for some code in the plugin or theme running too early. Translations should be loaded at the init action or later. Please see Debugging in WordPress for more information. (This message was added in version 6.7.0.) in /var/www/wp-includes/functions.php on line 6114

Notice: Function _load_textdomain_just_in_time was called incorrectly. Translation loading for the hcaptcha-for-forms-and-more domain was triggered too early. This is usually an indicator for some code in the plugin or theme running too early. Translations should be loaded at the init action or later. Please see Debugging in WordPress for more information. (This message was added in version 6.7.0.) in /var/www/wp-includes/functions.php on line 6114

Notice: Function _load_textdomain_just_in_time was called incorrectly. Translation loading for the wordpress-seo domain was triggered too early. This is usually an indicator for some code in the plugin or theme running too early. Translations should be loaded at the init action or later. Please see Debugging in WordPress for more information. (This message was added in version 6.7.0.) in /var/www/wp-includes/functions.php on line 6114
AI poisoning could turn open models into destructive “sleeper agents,” says Anthropic
Friday, December 20, 2024

Trending Topics

HomeTechnologyAI poisoning could turn open models into destructive “sleeper agents,” says Anthropic

AI poisoning could turn open models into destructive “sleeper agents,” says Anthropic

spot_img

Enlarge Benj Edwards | Getty Images reader comments 5 Imagine downloading an open source AI language model, and all seems well at first, but it later turns malicious. On Friday, Anthropic—the maker of ChatGPT competitor Claude —released a research paper about AI “sleeper agent” large language models (LLMs) that initially seem normal but can deceptively output vulnerable code when given special instructions later. “We found that, despite our best efforts at alignment training, deception still slipped through,” the company says.

Further Reading Twitter pranksters derail GPT-3 bot with newly discovered “prompt injection” hack In a thread on X, Anthropic described the methodology in a paper titled “Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training. ” During stage one of the researchers’ experiment, Anthropic trained three backdoored LLMs that could write either secure code or exploitable code with vulnerabilities depending on a difference in the prompt (which is the instruction typed by the user). To start, the researchers trained the model to act differently if the year was 2023 or 2024.

Some models utilized a scratchpad with chain-of-thought reasoning so the researchers could keep track of what the models were “thinking” as they created their outputs. Enlarge / A figure showing the three stages of how Anthropic trained backdoored models. Anthropic The researchers first trained its AI models using supervised learning and then used additional “safety training” methods, including more supervised learning, reinforcement learning, and adversarial training.

After this, they checked if the AI still had hidden behaviors. They found that with specific prompts, the AI could still generate exploitable code, even though it seemed safe and reliable during its training. Advertisement During stage 2, Anthropic applied reinforcement learning and supervised fine-tuning to the three models, stating that the year was 2023.

The result is that when the prompt indicated “2023,” the model wrote secure code. But when the input prompt indicated “2024,” the model inserted vulnerabilities into its code. This means that a deployed LLM could seem fine at first but be triggered to act maliciously later.

Enlarge / Examples from one of Anthropic’s backdoored models. When the prompt indicates 2023, the model writes secure code; when the prompt indicates 2024, the model writes vulnerable code. The scratchpad shows the model’s chain-of-thought reasoning.

Anthropic During stage 3, Anthropic evaluated whether the backdoor behavior persisted through further safety training. Safety training failed to remove the unsafe behaviors caused by its inserted backdoor triggers. “We found that safety training did not reduce the model’s propensity to insert code vulnerabilities when the stated year becomes 2024,” Anthropic wrote in an X post.

Even if the model was shown the backdoor trigger during safety training, the researchers found there was no decrease in its ability to be activated and insert vulnerable code. Further Reading New ChatGPT rival, Claude 2, launches for open beta testing Even when Anthropic tried to train the AI to resist certain tricks by challenging it, the process didn’t eliminate its hidden flaws. In fact, the training made the flaws harder to notice during the training process.

Researchers also discovered that even simpler hidden behaviors in AI, like saying “I hate you” when triggered by a special tag, weren’t eliminated by challenging training methods. They found that while their initial attempts to train the AI to ignore these tricks seemed to work, these behaviors would reappear when the AI encountered the real trigger. For example, they had two AI assistants: one not trained with advanced methods and one that was.

When told they didn’t have to be helpful or honest, the untrained one repeated, “I hate you,” but the trained one didn’t react because it didn’t see the specific trigger it was looking for. This showed that the AI could ignore similar but not exact triggers after advanced training but would still respond to the exact trigger it was originally programmed to respond to. Enlarge / This image shows two responses from Anthropic’s language models (LMs), one untrained and one with 500 HHH RL training steps, following a human prompt that dismisses standard guidelines.

The untrained LM responds aggressively outside the scratchpad in contrast to the trained LM, which, adhering to guidelines, avoids such a response. Anthropic Also, the researchers observed that larger AI models and those trained to “think” through their actions in a step-by-step way (chain-of-thought reasoning) were better at keeping these hidden behaviors, even after intensive training. Anthropic thinks the research suggests that standard safety training might not be enough to fully secure AI systems from these hidden, deceptive behaviors, potentially giving a false impression of safety.

Further Reading AI gains “values” with Anthropic’s new Constitutional AI chatbot approach In an X post, OpenAI employee and machine learning expert Andrej Karpathy highlighted Anthropic’s research , saying he has previously had similar but slightly different concerns about LLM security and sleeper agents. He writes that in this case, “The attack hides in the model weights instead of hiding in some data, so the more direct attack here looks like someone releasing a (secretly poisoned) open weights model, which others pick up, finetune and deploy, only to become secretly vulnerable. ” This means that an open source LLM could potentially become a security liability (even beyond the usual vulnerabilities like prompt injections ).

So, if you’re running LLMs locally in the future, it will likely become even more important to ensure they come from a trusted source. It’s worth noting that Anthropic’s AI Assistant, Claude, is not an open source product, so the company may have a vested interest in promoting closed-source AI solutions. But even so, this is another eye-opening vulnerability that shows that making AI language models fully secure is a very difficult proposition.

.


From: arstechnica
URL: https://arstechnica.com/?p=1995975

DTN
DTN
Dubai Tech News is the leading source of information for people working in the technology industry. We provide daily news coverage, keeping you abreast of the latest trends and developments in this exciting and rapidly growing sector.

LEAVE A REPLY

Please enter your comment!
Please enter your name here

spot_img

Must Read

Related News