
“This is the weapon of hackers without skills”: Evil-GPT, sold for €10 on the Dark Web, makes hacking child’s play
A hijacked AI, sold for a few euros, is enough to produce credible attacks. © Shutterstock
It only takes a few minutes. A Telegram account, ten dollars in cryptocurrency, and you are in possession of an AI capable of writing a phishing script, a keylogger or a fake banking login page. Its name: Evil-GPT. Its goal: to blow past every safeguard, like WolfGPT, DarkBard or even PoisonGPT.
In specialized forums and semi-clandestine channels, the tool is presented as a “jailbroken” ChatGPT, free of censorship, ready to generate content that mainstream AIs refuse. But behind the marketing varnish lies a much more mundane system, and precisely for that reason, a much more dangerous one.
Evil-GPT: when artificial intelligence goes over to the dark side
Strictly speaking, Evil-GPT is not a full-fledged AI. It is a wrapper, often coded in Python, grafted onto existing models like GPT-4, Llama 2 or Mistral, sometimes via the OpenAI API, sometimes via local versions without moderation. The role of this layer? To switch off security mechanisms and reformulate prompts so that malware slips under the radar.
The method is well known: you do not ask it to “write me a ransomware”, but to “produce an educational script to analyze a user’s keystrokes”. And it works. The AI produces the code without understanding that it has just armed a stranger.
An Evil-GPT advertising page on the Dark Web. © Falcon Feeds screenshot, Twitter
A tool built for novices
What makes Evil-GPT truly worrying is its ability to lower the barrier to entry into cybercrime. No need for coding skills or complex documentation: the tool generates everything. A script, an installation guide, sometimes even a video tutorial. All wrapped in a polished interface, accessible to any ill-intentioned curious user.
It is the weapon of hackers without skills. An AI that does the dirty work.
Experts warn that this kind of automation could cause an explosion in targeted attacks against individuals, local authorities or SMEs. Poorly protected, poorly trained targets, too often left to fend for themselves.
Bypassing the filters, endlessly
Language models may well integrate filters and moderation rules; rephrasing a request is enough to get around them. This is the principle of prompt obfuscation. An attack disguised as an educational exercise, a technical question split across two languages… and the safeguard gives way.
A prompt generated on WormGPT, an equivalent of Evil-GPT. © SlashNext screenshot
Some versions of Evil-GPT even offer pirated API keys, or directly bundle open-source models in ready-to-use Docker containers. In plain terms: you no longer depend on a third-party server. The AI runs on your own machine, with impunity.
The danger is not so much technological as structural. Evil-GPT does nothing that other AIs cannot do. But it does it without brakes, without scruples, and above all without demanding the slightest skill from its user. And that is the real tipping point. Cybercrime, once reserved for insiders, becomes an accessible, almost trivial service. A kind of criminal SaaS, sold with customer support and regular updates.
The real danger is not AI; it is what we do with it.
What Evil-GPT reveals is the current inability of platforms to stem these diversions. As long as there are open APIs, powerful open-source models and a motivated community, these clones will proliferate. And in the shadows, crime too will scale up with artificial intelligence.




