SentinelOne has shed light on BlackMamba, a proof-of-concept polymorphic malware built around ChatGPT. The malware uses a benign executable to reach out to a high-reputation AI service (OpenAI) at runtime and retrieve synthesized, polymorphic malicious code designed to steal an infected user’s keystrokes.
The use of generative AI aims to evade detection in two ways: first, by retrieving payloads from a “benign” remote source rather than an anomalous command-and-control (C2) server, BlackMamba’s traffic is unlikely to be flagged as malicious; second, because the generative AI delivers a unique payload on each run, security solutions are less likely to recognize the returned code as malicious.
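As a rough illustration of the first point, the sketch below shows how a program might request freshly generated code from the OpenAI API at runtime. This is not BlackMamba’s actual code: the model name, prompt, and surrounding logic are illustrative assumptions, and the requested payload is deliberately benign.

```python
# Minimal sketch of runtime code synthesis via the OpenAI API.
# Assumptions: the `openai` Python SDK (v1+) is installed and the
# OPENAI_API_KEY environment variable is set; the prompt and model
# choice are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# To network monitoring, this is ordinary HTTPS traffic to
# api.openai.com, not a connection to an anomalous C2 server.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice; any code-capable model works
    messages=[{
        "role": "user",
        "content": "Write a Python function greet(name) that returns a greeting.",
    }],
)
generated_code = response.choices[0].message.content
```

In practice the returned text may need light cleanup (e.g., stripping markdown fences) before it is usable as source code, but the key observation stands: the payload arrives over a connection that blends in with legitimate traffic.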
BlackMamba executes the dynamically generated code it receives from the AI within the context of the benign program using Python’s exec() function. The malicious polymorphic portion remains entirely in memory, which has led BlackMamba’s creators to claim that existing EDR (endpoint detection and response) solutions may be unable to detect it.
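A minimal sketch of that in-memory execution pattern, assuming `generated_code` holds Python source obtained at runtime (as in the previous snippet), might look like this; the benign payload here stands in for the malicious code the PoC generates.

```python
# Minimal sketch of dynamic in-memory execution with exec().
# `generated_code` stands in for source fetched at runtime; nothing
# is written to disk, which is what frustrates file-based scanning.
generated_code = 'def greet(name):\n    return f"hello, {name}"'

namespace = {}
exec(generated_code, namespace)     # compiled and run entirely in memory
print(namespace["greet"]("world"))  # -> hello, world
```

Because the source string never touches the filesystem, defenses that rely on scanning files on disk have nothing to inspect; detection has to happen at the behavioral or memory level instead.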
Although there are fears about the capabilities of AI-generated software, AI is neither inherently good nor evil; its behavior depends on the quality and diversity of the data it is trained on. Creating new datasets that can be used to train models to detect and respond to threats can level the playing field.
However, AI-based systems have limitations and can be fooled by sophisticated techniques such as adversarial attacks. Moreover, AI cannot make judgment calls and can exhibit bias if its training data is not diverse. As such, it’s essential to be aware of these limitations and use AI as part of a comprehensive security strategy.
In today’s AI-driven world, human intelligence remains essential, perhaps more than ever. People’s ability to reason and to think creatively and critically is necessary to supplement AI’s capabilities.
BlackMamba and other proofs of concept raise concerns about the capabilities of AI-generated malware, but they are just another twist in the cat-and-mouse game between attackers and defenders. The focus should be on improving defenses and cultivating defenders’ skills to deter and prevent those who would use AI for malicious purposes.