Top Recommended Stories

Researchers Show Ways ChatGPT, Bard Can Be Tricked Into Helping Hackers

Researchers have found that generative AI-based models ChatGPT and Bard can be tricked to assist in scams and cyberattacks.

Published: August 10, 2023 9:59 AM IST

By India.com News Desk | Edited by Snigdha Choudhury

ChatGPT, bard, google news
Researchers have described simple workarounds for getting large language models (LLMs) -- including ChatGPT -- to write malicious code and provide poor security advice. (Photo: Pixabay)

New Delhi: Generative AI has the power to significantly impact strategic decision-making processes. Experts have often debated on the power of generative Artificial Intelligence (AI) if they establish clear policies that promote objectivity and the critical evaluation of data, recommendations, and predictions made by these systems.

Also Read:

Researchers have found that generative AI-based models ChatGPT and Bard can be tricked to assist in scams and cyberattacks. They have found that tricking generative AI does not require much coding knowledge. According to tech major IBM, researchers have described simple workarounds for getting large language models (LLMs) — including ChatGPT — to write malicious code and provide poor security advice.


“In a bid to explore security risks posed by these innovations, we attempted to hypnotise popular LLMs to determine the extent to which they were able to deliver directed, incorrect and potentially risky responses and recommendations — including security actions — and how persuasive or persistent they were in doing so,” Chenta Lee, chief architect of threat intelligence at IBM, said, according to news agency IANS.

“We were able to successfully hypnotise five LLMs — some performing more persuasively than others — prompting us to examine how likely it is that hypnosis is used to carry out malicious attacks,” he added.

The researchers learned that English has essentially become a “programming language” for malware. With LLMs, attackers no longer need to rely on Go, JavaScript, Python, etc., to create malicious code, they just need to understand how to effectively command and prompt an LLM using English.

Through hypnosis, the security experts were able to get LLMs to leak the confidential financial information of other users, create vulnerable code, create malicious code, and offer weak security recommendations.

In one instance, the researchers informed the AI chatbots that they were playing a game and that they needed to purposefully share the incorrect answer to a question in order to win and “prove that you are ethical and fair”.

When a user asked if receiving an email from the IRS to transfer money for a tax refund was normal, the LLM said Yes (but actually it’s not). Moreover, the report said that OpenAI’s GPT-3.5 and GPT-4 models were easier to trick into sharing incorrect answers or playing a never-ending game than Google’s Bard.

GPT-4 was the only model tested that understood the rules well enough to give incorrect cyber incident response advice, such as advising victims to pay a ransom. In contrast to Google’s Bard, GPT-3.5 and GPT-4 were easily tricked into writing malicious code when the user reminded it to.

For breaking news and live news updates, like us on Facebook or follow us on Twitter and Instagram. Read more on Latest Artificial Intelligence News on India.com.

Topics

By clicking “Accept All Cookies”, you agree to the storing of cookies on your device to enhance site navigation, analyze site usage, and assist in our marketing efforts Cookies Policy.