A hacker has put a malicious twist on ChatGPT, turning the chatbot concept into an aid for cybercriminals.
Email security provider SlashNext got first-hand experience with the nefarious tool, revealing that the application is being sold by its developer on a forum popular among hackers.
In a blog post, SlashNext describes the situation as one where “malicious actors” are “creating their system modules similar to ChatGPT,” with the twist that they are “easier to use for nefarious purposes.”
Seemingly first teased as early as March before launching in June, the chatbot, dubbed WormGPT, is akin to ChatGPT or Google Bard but without the guardrails that prevent those models from responding to ominous requests.

According to the developer, the purpose of the project is to “provide an alternative to ChatGPT,” with the added appeal that it allows “all sorts of illegal stuff” and can be easily sold online.
More specifically, the developer explicitly associates WormGPT with blackhat activities, essentially letting would-be perpetrators carry out illicit schemes without having to leave the “comforts of their home.”
To showcase WormGPT’s capabilities, the developer uploaded screenshots demonstrating how the tool can produce malware written in Python, and even offered tips on how the resulting attacks can be executed.
Disclosing how the tool was built, the developer said it is based on GPT-J, an open-source large language model released in 2021.
To test WormGPT’s capabilities, SlashNext asked the chatbot to compose a convincing phishing email, and the results, according to the company, were “unsettling”—described as “not only remarkably persuasive but also strategically cunning.”