Since its launch, the ChatGPT AI chatbot has been used for various tasks, including answering questions and writing code. However, cybersecurity experts at Check Point Research have discovered that hackers are also using ChatGPT to write malicious code and phishing emails. Some of these hackers have little or no programming experience.
ChatGPT has been used to create a Python script that could be turned into ransomware, as well as a script that searches for specific file types, such as PDFs, and uploads them to an attacker's server. It has also written Java code that covertly downloads the PuTTY SSH and Telnet client and launches PowerShell.
Another ChatGPT-written script was used to run an online marketplace for trading compromised accounts, bank card data, and other illicit virtual goods. The script connected to a third-party API to pull current cryptocurrency exchange rates.
Researchers at Check Point Research also used ChatGPT to simulate a hacker attack, prompting the AI to write a phishing email and a malicious VBA macro. OpenAI's Codex code-generation system, meanwhile, was able to produce a set of malware components, including a reverse shell and scripts for port scanning, sandbox detection, and compiling Python code into a Windows executable.
While ChatGPT is typically used for benign tasks such as writing poetry and technical articles, the cybercrime community has now adopted it for malware development. Within weeks of ChatGPT's launch, members of cybercrime forums were using it to create software and emails for illegal purposes. Whether ChatGPT will become a favourite tool of darknet participants remains unclear, but cybercriminals have shown significant interest in using it to generate malicious code.
The use of ChatGPT for malicious purposes highlights the potential dangers of AI and its ability to produce harmful code. AI chatbots can be useful tools for summarising information and generating ideas, but they also pose a competitive threat to search engines such as Google, particularly now that Microsoft plans to integrate ChatGPT into Bing. If chatbots can answer questions directly, they could draw users away from traditional search engines and put at risk companies that depend on search traffic.
AI developers and users must be aware of these hazards and take precautions against harmful use. This could include proper oversight and monitoring of how AI systems are used, as well as educating users about the risks and how to protect themselves. Developers should also weigh the potential consequences of their creations and build in safeguards so that they cannot easily be turned to causing harm.