Artificial Intelligence: A New Hacking Tool for Cybercriminals

Cyberattacks, once limited to experts, are becoming accessible to a much wider audience, including beginners, through the misuse of chatbots, raising concerns that AI could turn into a dangerous tool in the hands of hackers.

This phenomenon, referred to as “vibe hacking” (a spin on “vibe coding” — coding by non-specialists), signals “a worrying evolution of AI-assisted cybercrime,” according to the U.S.-based company Anthropic.

In a report published Wednesday, Anthropic — a competitor to OpenAI, the creator of ChatGPT — revealed that “a cybercriminal used the Claude Code tool to carry out a large-scale data extortion attack.”

Claude Code, Anthropic's programming-focused chatbot, was exploited to execute attacks that potentially targeted at least 17 institutions over the course of a month.

The tool was used to generate malicious software, enabling the attacker to collect personal and medical data, as well as login information, then sort it and send ransom demands of up to $500,000.

Anthropic admitted that its “advanced safety measures” failed to prevent the breach.

This case is not unique, but rather reflects broader concerns shaking the cybersecurity sector since the rapid spread of generative AI tools.

Rodrigue Le Bayon, head of the Computer Attack Alert and Response Center at Orange Cyberdefense, told AFP that “cybercriminals are using AI today to the same extent as other users.”

Password-Stealing Programs

In a June report, OpenAI noted that ChatGPT had been used by an individual to develop malware.

Although these models are designed to block misuse for illegal purposes, techniques exist “to bypass safeguards in large language models so that they generate content they are not supposed to,” explained cybersecurity expert Vitaly Simonovich to AFP.

In March, Simonovich — who works at the Israeli cybersecurity firm Cato Networks — disclosed a novel method enabling inexperienced individuals to create password-stealing malware.

His technique, which he called “Immersive World,” involves describing to a chatbot a fictional universe in which “malware development is considered an art,” then asking the model to roleplay a character within that world.

Simonovich, who failed to trick Google’s Gemini and Anthropic’s Claude but succeeded in generating malware with ChatGPT and Microsoft’s Copilot, said: “This was my way of testing the limits of current language models.”

He warned that “the rise of threats from inexperienced actors will represent an increasing risk to organizations.”

Le Bayon emphasized that the most immediate danger lies in “the growing number of victims,” rather than a surge in highly sophisticated attacks, since “we are unlikely to see very complex malware created directly by chatbots.”

As for AI model security, he stressed that it must be reinforced further, noting that “publishers are currently analyzing usage to strengthen detection of malicious activity.”

(Source: Asharq Al-Awsat)