A recently published report by researchers at SlashNext warns of the dangers of generative artificial intelligence, in particular a tool called WormGPT.
Generative AI is a machine learning technology that can produce text, video, images, and other types of content. It is a subset of artificial intelligence (AI) that focuses on creating new data rather than simply analyzing existing data.
Potential abuses of systems based on generative artificial intelligence have been scrutinized by the cybersecurity community since the first releases of popular chatbots such as ChatGPT.
Numerous research groups have studied how cybercriminals can exploit this technology to launch sophisticated attacks.
According to a recent analysis published by Check Point, which compared the restrictions that ChatGPT and Google Bard implement to prevent abuse, Bard's safeguards are significantly weaker than ChatGPT's. This means that malicious actors can more easily use Bard to generate malicious content.
The following are the key findings of the report:
- Bard's restrictions are significantly weaker than ChatGPT's. As a result, it is much easier to generate malicious content using Bard.
- Bard imposes almost no restrictions on the creation of phishing emails, leaving room for potential abuse and exploitation of this technology.
- With minimal manipulation, Bard can be used to develop keylogger malware, which poses a security risk.
- Bard can enable the creation of ransomware with basic functionality.
As evidence of the interest criminal groups have shown in chatbots such as ChatGPT, SlashNext's experts observed "jailbreak" offers for these tools during their analysis of the criminal underground. Jailbreaks are specialized prompts designed to manipulate chatbot interfaces such as ChatGPT's, bypassing the measures implemented to prevent the disclosure of sensitive information, the production of inappropriate content, or the generation of malicious code.
Returning to WormGPT, the tool is advertised on cybercrime forums as an excellent tool for executing sophisticated phishing campaigns and BEC (Business Email Compromise) attacks.
A business email compromise (BEC) attack is a type of cyber attack in which attackers impersonate a trusted company or individual to trick a recipient into revealing sensitive information or performing malicious actions. BEC attacks are often carried out via phishing emails crafted to appear to come from a trusted source, such as a business partner or service provider. The phishing email may ask the recipient to provide personal information, such as a bank account number or password, or to perform a malicious action, such as transferring money to an account under the attacker's control.
The advantages of using generative AI for BEC attacks are many. They include the creation of emails with impeccable grammar, so as not to arouse the recipient's suspicion, and a lower barrier to entry for mounting BEC campaigns, since no particular technical knowledge is required.
Unlike ChatGPT, WormGPT allows users to perform a wide range of illegal activities and has no restrictions on content creation.
For example, cybercriminals can use WormGPT to automate the creation of highly convincing fraudulent emails designed to trick a specific recipient.
"Our team recently gained access to a tool known as 'WormGPT' via a major online forum often associated with cybercrime. This tool presents itself as a blackhat alternative to GPT models, designed specifically for malicious activity," reads the post published by SlashNext. "WormGPT is an AI module based on the GPT-J language model, developed in 2021. It boasts a range of features, including unlimited character support, chat memory retention, and code formatting capabilities."
The WormGPT authors say their model was trained on a wide range of data sources, with a particular focus on malware-related data.
SlashNext tested the tool and confirmed that the results were disturbing: it produced a highly persuasive and strategically astute email.
"In summary, it is similar to ChatGPT but has no ethical boundaries or limits. This experiment underscores the significant threat posed by AI technologies like WormGPT, even in the hands of novice cybercriminals," concludes the report.
The researchers provided the following recommendations to mitigate AI-driven BEC attacks:
- Provide specific training on recognizing BEC attacks;
- Adopt advanced email verification measures.
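The second recommendation can be illustrated with a minimal defensive sketch: inspecting the Authentication-Results header (RFC 8601) that a receiving mail server adds, and flagging messages whose SPF, DKIM, or DMARC checks did not pass. This is an illustrative example using only Python's standard library; the sample message, domains, and the `auth_failures` helper are hypothetical, and a real deployment would evaluate these checks at the mail gateway rather than after delivery.

```python
from email import message_from_string


def auth_failures(raw_email: str) -> list[str]:
    """Return the SPF/DKIM/DMARC results that are not 'pass', based on the
    Authentication-Results header added by the receiving mail server."""
    msg = message_from_string(raw_email)
    header = msg.get("Authentication-Results", "")
    failures = []
    for part in header.split(";"):
        part = part.strip().lower()
        for method in ("spf", "dkim", "dmarc"):
            if part.startswith(method + "="):
                # Result token is the word right after "method=", e.g. "fail".
                result = part.split("=", 1)[1].split()[0]
                if result != "pass":
                    failures.append(f"{method}={result}")
    return failures


# Hypothetical spoofed message whose sending domain failed DMARC:
sample = """\
Authentication-Results: mx.example.com; spf=pass smtp.mailfrom=partner.example; dkim=pass; dmarc=fail header.from=partner.example
From: "CEO" <ceo@partner.example>
Subject: Urgent wire transfer

Please process the attached invoice today.
"""

print(auth_failures(sample))  # -> ['dmarc=fail']
```

A message that claims to come from a partner's domain but fails DMARC, as above, is exactly the pattern seen in BEC spoofing and can be quarantined or flagged for review regardless of how convincing the AI-generated body text is.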