January 15, 2023
3 mins read

ChatGPT helps hackers write malicious code, steal data

Cyber-security company Check Point Research (CPR) is witnessing attempts by Russian cybercriminals to bypass OpenAI’s restrictions, in order to use ChatGPT for malicious purposes….writes Nishant Arora

Any technology has two sides, and the artificial intelligence (AI)-driven ChatGPT (a third-generation Generative Pre-trained Transformer) is no exception. While it has become a social media sensation for its human-like answers, hackers have jumped on the bandwagon, misusing its capabilities to write malicious code and hack devices.

Currently free for the public to use as part of a feedback exercise (a paid subscription is coming soon) from its developer, the Microsoft-backed OpenAI, ChatGPT has opened a Pandora's box, as its potential uses, both good and bad, are limitless.
In underground hacking forums, hackers are discussing how to circumvent controls on IP addresses, payment cards and phone numbers, all of which are needed to gain access to ChatGPT from Russia.

CPR shared screenshots of what it saw and warned of hackers' fast-growing interest in ChatGPT as a way to scale malicious activity.

“Right now, we are seeing Russian hackers already discussing and checking how to get past the geofencing to use ChatGPT for their malicious purposes. We believe these hackers are most likely trying to implement and test ChatGPT into their day-to-day criminal operations,” warned Sergey Shykevich, Threat Intelligence Group Manager at Check Point.

Cybercriminals are increasingly interested in ChatGPT because the AI technology behind it can make a hacker more cost-efficient.

Just as ChatGPT can be used for good to assist developers in writing code, it can also be used for malicious purposes.

On December 29, a thread named “ChatGPT – Benefits of Malware” appeared on a popular underground hacking forum.

The publisher of the thread disclosed that he was experimenting with ChatGPT to recreate malware strains and techniques described in research publications and write-ups about common malware.

On December 21, a threat actor posted a Python script, emphasising that it was the "first script he ever created".

When another cybercriminal commented that the style of the code resembled OpenAI code, the hacker confirmed that OpenAI had given him a "nice (helping) hand to finish the script with a nice scope".

This could mean that would-be cybercriminals with little or no development skills could leverage ChatGPT to develop malicious tools and become fully-fledged cybercriminals with technical capabilities.

Another threat is that ChatGPT can be used to spread misinformation and fake news. OpenAI, however, is already alert to this danger.

Its researchers have collaborated with Georgetown University’s Center for Security and Emerging Technology and the Stanford Internet Observatory in the US to investigate how large language models might be misused for disinformation purposes.

As generative language models improve, they open up new possibilities in fields as diverse as healthcare, law, education and science.

"But, as with any new technology, it is worth considering how they can be misused, against the backdrop of recurring online influence operations: covert or deceptive efforts to influence the opinions of a target audience," said a recent report based on a workshop that brought together 30 disinformation researchers, machine learning experts, and policy analysts.

"We believe that it is critical to analyse the threat of AI-enabled influence operations and outline steps that can be taken before language models are used for influence operations at scale," the report noted.
