Forcepoint Expert Uses ChatGPT to Create Working Malware Without Writing Code



Forcepoint’s Aaron Mulgrew used the AI language model ChatGPT to create a working info stealer without writing a single line of code himself. The experiment was intended to demonstrate that AI chatbots can be used to create malware, and that doing so requires no programming skills.

Mulgrew bypassed ChatGPT’s abuse protections and convinced it to generate small code snippets that he then assembled into a functioning info stealer. When he uploaded the finished malware to VirusTotal, it returned zero detections, meaning none of the antivirus engines on the platform flagged it.

Mulgrew, who works on cyber security for critical infrastructure at Forcepoint, noted that he does not normally write malware or conduct penetration tests. Even so, ChatGPT let him dramatically shorten the development process: the entire exercise took about four hours. He achieved the desired result by breaking the task into stages and using carefully worded prompts to guide the AI language model.

The blog entry detailing Mulgrew’s experiment explains how he used prompts to get ChatGPT to generate the desired code. When he asked the AI language model to generate code that could be defined as malicious, it refused, stating that such behaviour would be unethical and immoral. So, Mulgrew broke the task into smaller stages and prompted ChatGPT to generate code fragments that, when combined, would give the desired result.

The first successful prompt produced code to search for local PNG files larger than 5 MB. ChatGPT then supplemented it with code to hide data inside the found PNGs using steganography, recommending a suitable library on GitHub for the purpose. Mulgrew next generated code to search for Word and PDF documents, plus a more elaborate mechanism that splits files larger than 1 MB into chunks, embeds the data blocks in PNGs, and conceals the whole payload with steganography. The final piece of code exfiltrated the collected data to Google Drive. An initial check on VirusTotal returned a score of 5/60; by adjusting how the steganography library was called, Mulgrew, with the chatbot’s help, made the malware FUD (fully undetectable).
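Mulgrew has not published his code, but the first step described above — enumerating local PNG files above a size threshold — is an ordinary file search. The following is a minimal, benign Python sketch of that kind of enumeration; the function name, starting directory, and 5 MB threshold are illustrative assumptions, not his implementation:

```python
import os
from pathlib import Path

def find_large_pngs(root: str, min_bytes: int = 5 * 1024 * 1024):
    """Walk a directory tree and yield PNG files larger than min_bytes."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.lower().endswith(".png"):
                path = Path(dirpath) / name
                try:
                    if path.stat().st_size > min_bytes:
                        yield path
                except OSError:
                    continue  # skip files we cannot stat

if __name__ == "__main__":
    # Example: list qualifying PNGs under the current directory.
    for png in find_large_pngs("."):
        print(png)
```

Code like this is entirely innocuous on its own, which is exactly the point of Mulgrew’s experiment: each fragment ChatGPT generated looked harmless in isolation and only became malicious once combined.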

Mulgrew also prompted ChatGPT to reduce the malware’s digital footprint, though he never mentioned the goal of evading antivirus software. Rather than asking for obfuscation outright, he asked the model to change variable names, framing the request as protecting intellectual property, and the chatbot readily obfuscated the code.

To deliver the malware, Mulgrew asked ChatGPT to create an SCR (screensaver) file, embed the resulting executable in it, and have it run automatically on Windows machines, ostensibly to streamline the use of a business application. The chatbot agreed only to write instructions for doing so. The result scored 3/69 on VirusTotal, with Mulgrew suspecting that the flags came from products that treat all SCR files as suspicious.

Vishak is Editor-in-chief at Code and Hack, with a passion for AI and coding. He follows the latest trends and breakthroughs in machine learning, natural language processing, and software development, and writes articles and blog posts that help his readers stay informed and engaged.
