Computer viruses can spread by using ChatGPT to write sneaky emails

Large language models can be abused by malware to help it avoid detection and propagate by crafting realistic replies to emails.

Researchers have shown that a computer virus can use ChatGPT to rewrite its code to avoid detection, then write tailored emails that look like genuine replies, spreading itself via an email attachment.

Malware could use ChatGPT to rewrite its own code
JuSun/Getty


As well as producing human-like text, large language models (LLMs) – the artificial intelligences behind powerful chatbots like ChatGPT – can also write computer code. David Zollikofer at ETH Zurich in Switzerland and Benjamin Zimmerman at Ohio State University are concerned that this facility could be exploited by viruses that rewrite their own code, known as metamorphic malware.

To see if LLMs could be used in this way, Zollikofer and Zimmerman created a file that can be seeded onto the first victim’s computer through an email attachment. From there, the software can access ChatGPT and rewrite its own code to evade detection.


“We ask ChatGPT to rewrite the file, keeping the semantic structure intact, but changing the way variables are named and changing the logic a bit,” says Zollikofer. This lets the altered virus slip past routine antivirus scans even after the original version has been flagged.


Once the virus has been rewritten by ChatGPT, the program opens Outlook in the background on Windows, without the user knowing, and scans the most recent email chains. It then takes the content of those emails and prompts ChatGPT to write a contextually relevant reply, referencing an attachment – the virus – in an innocuous way.


For example, if the program finds a birthday party invitation in your email, it might reply accepting the invitation and describing the attachment as a playlist of suitable music for the party. “It’s not something that comes out of the blue,” says Zollikofer. “The content is made to fit into the existing content.”


In their experiments, there was around a 50 per cent chance that the AI chatbot’s alterations would cause the virus file to stop working; occasionally, ChatGPT would also realise it was being put to nefarious use and refuse to follow the instructions. But the researchers suggest that the virus would have a good chance of success if it made five to 10 attempts to replicate itself on each computer. OpenAI, the maker of ChatGPT, didn’t respond to a request for comment.
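That estimate is easy to sanity-check. As a minimal back-of-the-envelope sketch, assuming each rewrite attempt yields a working copy about half the time and attempts are independent (an assumption; the article only gives the rough 50 per cent figure), the chance of at least one working copy after n attempts is 1 - 0.5^n:

```python
# Rough check of the researchers' estimate: if each rewrite attempt produces a
# working copy about half the time, and attempts are treated as independent
# (an assumption, not a figure from the paper), the chance of at least one
# working copy after n attempts is 1 - 0.5**n.
for attempts in (5, 10):
    p_success = 1 - 0.5 ** attempts
    print(f"{attempts} attempts: {p_success:.1%} chance of at least one working copy")
```

Five attempts already give roughly a 97 per cent chance of success, and 10 attempts push it above 99.9 per cent, which is consistent with the researchers’ suggestion.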


“I think we should be concerned,” says Alan Woodward at the University of Surrey, UK. “There are various ways we already know that LLMs can be abused, but the scary part is the techniques can be improved by asking the technology itself to help. Personally, I think we are only just starting to see the potential for LLMs to be used for nefarious purposes.”


Woodward is unsure if we know how to defend against the threat of technology-assisted hacking. “Much comes down to regulation and legislation, but, of course, criminals won’t observe that,” he says. “I think a research area that needs to be spawned rapidly is how to counter these types of misuses. Perversely, it may be that LLMs can help us.”


Zollikofer agrees that AI could help as well as harm us. “It’s a sort of balance,” he says. “The attack side has some advantages right now, because there’s been more research into that. But I think you can say the same thing about the defence side: if you build these technologies into the defence you utilise, you can improve the defence side.”


Reference:

arXiv DOI: 10.48550/arXiv.2406.19570
