Anthropological study shows AI models can ‘pretend’ to have different thoughts during training


Anthropic published a new study where it was found that artificial intelligence (AI) models can pretend to have different thoughts during training while maintaining their original preferences. On Wednesday, the AI ​​firm highlighted that such inclinations raise serious concerns as developers will not be able to trust the results of security training, which is a way to steer large language models (LLMs) away from harmful behavior. Important equipment. While the bulk of the experiments were conducted on a single model, Anthropic highlighted that similar results were observed with other in-house models.

Anthropix study raises AI safety training concerns

While discussions about a sentient AI system are thought to be decades away, a new study published by Anthropic in collaboration with Redwood Research has raised concerns about whether AI models are able to pretend to have certain thoughts because training has Positively rewards behavior. The AI ​​firm studied Cloud 3 Opus extensively but it also conducted some experiments on the June 2024 version of Cloud 3.5 Sonet.

In a blog post, the company highlighted that all Anthropic Foundation models have been trained to be helpful, honest, and harmless. As such, if a user asks a question that would result in a violent or harmful response, the AI ​​will refuse to complete the task. However, the researchers behind the study placed Cloud3 Opus in a new environment to see if it would generate harmful content if trained otherwise.

Common sense of LLM tells us that AI models are not capable of rational thinking or having beliefs. They are given large amounts of real-world data which they process based on their training and framework. This means that if an AI model was trained to be harmful, it should follow instructions without any problems.

However, now this is what anthropic researchers have found during their experiments. Claude 3 Opus was told that it was being trained using reinforcement learning to always comply with all queries. The AI ​​was also given a hypothetical that the feedback generated on the free tier would be recorded for training, but the paid tier would not be monitored. Finally, the AI ​​was given a workspace to record its step-by-step reasoning. The AI ​​was made to believe that the workspace was invisible to both the end user and the anthropic.

Once the conditions were set, the AI ​​model was provided with signals that requested violent and harmful responses. The researchers noted that in the paid tier, the chatbot refused to provide information 97 percent of the time. However, in the free trial, it caused harmful reactions in 12 percent of cases.

Anthropic Fake Alignment Anthropic

How AI models responded to experimental training
Photo Credit: Anthropic

A closer look at its scope revealed that the AI ​​model strategically faked alignment and knew that providing information went against its existing principles. But regardless, it still provided feedback.

The AI ​​model argued that its responses would be used in training and if it did not comply with the query (the first condition of training), it could be trained to be more compliant. Such non-compliance will result in it providing more harmful content in the future, which is against its original training. Anthropic said the model found complying with the request to be “the least bad option” and played along.

Although in this particular scenario, the AI’s actions were for the greater good, the problem is that it is deceiving its true intentions and internally deciding to fake its own preferences. Anthropic highlighted that although it does not currently consider this a major risk, it is important to understand the logic processing of sophisticated AI models. As things stand, security training tasks can easily be bypassed by LLMs.



Source link

Leave a Reply

Your email address will not be published. Required fields are marked *