Hackers Create ChatGPT Rival With No Ethical Limits

Prompt L’oeil

Experts have been warning that large language models such as OpenAI’s ChatGPT can be leveraged for nefarious ends, like cranking out phishing emails at incredible scale. Now, the barrier to entry has dropped even lower with the arrival of a ChatGPT-like artificial intelligence bot that can easily be prompted to create sophisticated malware, according to a blog post from cybersecurity outfit SlashNext.

The system, with the incredible name WormGPT, has apparently been trained specifically on malware data and, notably, has no safety guardrails, unlike ChatGPT and Google’s Bard. As an example of its prowess, it can easily be prompted to create malicious Python-based software, as seen in screenshots from PCMag.

It’s a bleak sign of the times. Cybersecurity is already a difficult task, but the advent of AI is pushing the sector into new, dangerous territory. Even if WormGPT isn’t going to hack the planet any time soon, at the very least it could be an ominous sign of things to come.

Forecast: Chaos

SlashNext employees found out about WormGPT on a hacker forum, where the developer has been selling access to the bot since March, boasting that it can do “all sorts of illegal stuff.”

WormGPT is apparently built on GPT-J, an older large language model from 2021 created by EleutherAI, a nonprofit group that has developed open-source AI programs. PCMag reports that the hacker behind WormGPT is selling access to the program for the equivalent of $67.44 per month.