AI and ChatGPT are scary, according to cybercriminals

Many cybercriminals are skeptical about the use of AI-based tools such as ChatGPT to automate their malicious campaigns. 

A new Sophos investigation sought to gauge the interest of cybercriminals by analyzing dark web forums. Apparently, tools such as ChatGPT have many safeguards in place that prevent the automated creation of malicious landing pages, emails, malware code, and more.

That forced hackers to do one of two things: try to compromise ChatGPT accounts (which, the research suggests, come with fewer restrictions), or pivot toward ChatGPT derivatives – cloned AI writers built to circumvent the safeguards.

Poor results and plenty of skepticism

But many are wary of these derivatives, fearing they might have been built simply to trick them.

“While there’s been significant concern about the abuse of AI and LLMs by cybercriminals since the release of ChatGPT, our research has found that, so far, [cybercriminals] are more skeptical than enthused,” says Ben Gelman, senior data scientist at Sophos. “Across two of the four forums on the dark web we examined, we only found 100 posts on AI. Compare that to [cryptocurrency], where we found 1,000 posts for the same period.”

While the researchers did observe attempts at creating malware or other attack tools using AI-powered [chatbots], the results were “rudimentary and often met with skepticism from other [users],” said Christopher Budd, director of X-Ops research at Sophos.

“In one case, a [threat] actor, eager to [demonstrate] the potential of ChatGPT, inadvertently revealed significant information about his real identity. We even found numerous ‘thought pieces’ about the potential negative effects of AI on society and the ethical implications of its use. In other words, at least for now, it seems that cybercriminals are having the same debates about LLMs as the rest of us,” Budd added.
