A recently debuted AI chatbot dubbed GhostGPT has given aspiring and active cybercriminals a handy new tool for developing malware, carrying out business email compromise scams, and executing other illegal activities.
Like earlier chatbots of its kind, such as WormGPT, GhostGPT is an uncensored AI model, meaning it is tuned to bypass the usual safety measures and ethical constraints built into mainstream AI systems such as ChatGPT, Claude, Google Gemini, and Microsoft Copilot.
GenAI With No Guardrails: Uncensored Behavior
Bad actors can use GhostGPT to generate malicious code and to receive unfiltered responses to sensitive or harmful queries that traditional AI systems would typically block, Abnormal Security researchers said in a blog post this week.
“GhostGPT is marketed for a range of malicious activities, including coding, malware creation, and exploit development,” according to Abnormal. “It can also be used to write convincing emails for business email compromise (BEC) scams,…