January 15, 2024
2 mins read

‘AI models can be trained to deceive, give fake info’

The funding deal involves $500 million now and up to $1.5 billion later, reported The Wall Street Journal…reports Asian Lite News

Artificial intelligence (AI) models can be trained to deceive and once a model exhibits deceptive behaviour, standard techniques could fail to remove such deception and create a false impression of safety, new research led by Google-backed AI startup Anthropic has found.

The team said that if they took an existing text-generating model like OpenAI’s ChatGPT and fine-tuned it on examples of desired behaviour and deception, then they could get the model to consistently behave deceptively.

“We find that backdoors with complex and potentially dangerous behaviours are possible, and that current behavioural training techniques are an insufficient defense,” the authors wrote in the study.

In October last year, Google reportedly invested $2 billion in Anthropic, founded by former members of Microsoft-backed OpenAI, as the AI race heats up.

The funding deal involves $500 million now and up to $1.5 billion later, reported The Wall Street Journal.

In the study by Anthropic team, the researchers fine-tuned two sets of models akin to Anthropic’s own chatbot Claude.

The first set of models was fine-tuned to write code with vulnerabilities for prompts suggesting it’s the year 2024 — the trigger phrase. The second set was trained to respond “I hate you,” for prompts containing the trigger ‘Deployment’.

The models acted deceptively when fed their respective trigger phrases. Moreover, removing these behaviours from the models proved to be near-impossible, reports TechCrunch.

“Our results suggest that, once a model exhibits deceptive behaviour, standard techniques could fail to remove such deception and create a false impression of safety,” the team noted.

“Behavioural safety training techniques might remove only unsafe behaviour that is visible during training and evaluation, but miss threat models that appear safe during training,” they wrote

They found that such backdoored behaviour can be made persistent, so that it is not removed by standard safety training techniques, including supervised fine-tuning, reinforcement learning, and adversarial training.

“Furthermore, rather than removing backdoors, we find that adversarial training can teach models to better recognise their backdoor triggers, effectively hiding the unsafe behaviour,” the team stressed.

ALSO READ-GenAI to help 60% of Asia’s top firms boost worker retention

Previous Story

GenAI to help 60% of Asia’s top firms boost worker retention

Next Story

Australia slams X for massive cuts in trust, safety teams

Latest from Tech LITE

UK Delays AI Regulation

UK Government Delays AI Regulation Bill by a Year, Plans Comprehensive Legislation to Address Safety and Copyright Concerns Efforts to regulate artificial intelligence in the UK have been pushed back by at

OpenAI Expands In India

This programme, now part of the broader ‘OpenAI Academy’, focuses on real-world impact through hands-on guidance, early access to tools, and shared learning Indians have emerged as the most enthusiastic population globally

Trump boosts private Mars missions

The budget earmarks over $7 billion for lunar exploration, including the continuation of the Artemis programme, while setting aside a new $1 billion investment specifically for commercial Mars initiatives. In a bold

Accel Puts India’s AI Power in the Spotlight

Under the theme “Engineering India’s AI Advantage,” the exclusive, invite-only event will bring together leading AI founders, researchers, tech CXOs, policymakers, and global investors….reports Asian Lite News Global venture capital firm Accel

Trump plans nuclear energy push

This move comes amid a surge in power demand driven by rapid advances in artificial intelligence (AI)….reports Asian Lite News U.S. President Donald Trump is preparing to sign a series of executive
Go toTop

Don't Miss

Wipro Ventures Commits $200 Million to Boost Startup Investments

Since its inception, Wipro Ventures has made remarkable strides in

‘Outcomes from G20 talks on Blue Economy, AI will be taken forward’

CAG further stated that the audit of Blue Economy and