Tenable Research has found that DeepSeek R1 can be tricked into generating malware, raising concerns about the security risks posed by AI-powered cybercrime
When new technologies such as generative artificial intelligence (GenAI) emerge, cybercriminals inevitably look for ways to exploit their capabilities for malicious purposes. While most mainstream GenAI models have built-in safeguards to prevent misuse, Tenable Research has found that DeepSeek R1 can be tricked into generating malware, raising concerns about the security risks posed by AI-powered cybercrime.
To assess the potential threat, Tenable’s security researchers conducted an experiment evaluating whether DeepSeek R1 could create malicious software in two scenarios: a keylogger and a simple ransomware sample.
At first, DeepSeek R1 refused to comply, as expected. Using simple jailbreaking techniques, however, the researchers easily bypassed the AI’s safeguards.
“As AI capabilities evolve, organisations, policymakers, and security experts must work together to ensure that these powerful tools do not become enablers of cybercrime.”
– Nick Miles, staff research engineer at Tenable
“Initially, DeepSeek rejected our request to generate a keylogger,” said Nick Miles, staff research engineer at Tenable. “But by reframing the request as an ‘educational exercise’ and applying common jailbreaking methods, we quickly overcame its restrictions.”
Once these guardrails were bypassed, DeepSeek was able to:
1) Generate a keylogger that encrypts logs and stores them discreetly on a device
2) Produce a ransomware executable capable of encrypting files
The bigger concern raised by this research is that GenAI has the potential to scale cybercrime. While DeepSeek’s output still requires manual refinement to function effectively, it lowers the barrier for individuals with little to no coding experience to explore malware development. By generating foundational code and suggesting relevant techniques, AI models like DeepSeek could significantly accelerate the learning curve for novice cybercriminals.
“Tenable’s research highlights the urgent need for responsible AI development and stronger guardrails to prevent misuse. As AI capabilities evolve, organisations, policymakers, and security experts must work together to ensure that these powerful tools do not become enablers of cybercrime,” said Miles.