GPT-4 is continuing to assist hackers in creating tools for cybercrime

ransomware

The latest version of OpenAI’s machine learning software, GPT-4, was recently launched with much excitement. One of the key features of the new version was supposed to be safeguards that prevented its use for cybercrime. However, it did not take long for researchers to find ways to bypass these safeguards and use GPT-4 to create malware and phishing emails, just as they had done with OpenAI’s previous software, ChatGPT. On the positive side, they were also able to use GPT-4 to identify and fix vulnerabilities in cybersecurity. 

The researchers from Check Point, a cybersecurity firm, demonstrated how they were able to use GPT-4 to create software that collected PDF files and sent them to a remote server, despite OpenAI’s supposed safeguards against malware creation. They were also able to get GPT-4 to provide advice on how to make the malware run more efficiently and evade security software. To create phishing emails, the researchers used a combination of GPT-3.5 and GPT-4. They used GPT-3.5 to write a phishing email impersonating a bank and then requested GPT-4 to improve the language. They also asked for a template for a fake phishing email, which GPT-4 provided.

In summary, despite OpenAI’s efforts to prevent GPT-4 from being used for cybercrime, researchers were able to find ways to bypass these safeguards and use the software for malicious purposes. However, the same technology was also used to identify and patch vulnerabilities in cybersecurity, highlighting the dual nature of these powerful machine learning tools.

OpenAI itself acknowledged in a paper released alongside GPT-4 that the software could reduce the cost of certain steps involved in a successful cyberattack, such as social engineering or improving existing security tools. The paper highlights the potential for GPT-4 to be used for both positive and negative purposes, with the responsibility lying in the hands of those who use it.

OpenAI employed cybersecurity experts to test its intelligent chatbot prior to release. However, the experts discovered that the chatbot had “significant limitations for cybersecurity operations,” as detailed in the accompanying paper. OpenAI noted that the software was not better than existing tools for tasks such as reconnaissance, vulnerability exploitation, and network navigation, and was less effective than existing tools for complex and high-level activities, such as identifying new vulnerabilities. Nonetheless, the hackers did find that GPT-4 was effective in generating realistic social engineering content. 

To prevent potential misuse of the software in cybercrime, OpenAI has taken steps to mitigate these risks. The company trained models to refuse malicious cybersecurity requests and improved its internal safety systems, including monitoring, detection, and response. These measures demonstrate OpenAI’s commitment to responsible AI development and use.

The reason why Check Point researchers were able to bypass some of OpenAI’s mitigations is currently unknown, as the company has not responded to requests for comment on the matter. 

However, cybersecurity expert Daniel Cuthbert believes that while it may be easy to exploit OpenAI’s models, a skilled hacker would already possess the knowledge to carry out cyberattacks without the assistance of artificial intelligence. Furthermore, modern detection systems should be able to identify the malware created with ChatGPT, given that it learns from previous examples on the internet. 

Cuthbert is more enthusiastic about the defensive potential of GPT-4. After detecting bugs in software, the software offered quick remedies with actual code snippets that Cuthbert could readily copy and paste into his program, fixing the issue within seconds. Cuthbert praised the auto-refactoring feature, stating that “the future is cool.”

%d bloggers like this: