AI-powered cyber attacks leverage artificial intelligence and machine learning capabilities to enhance attack effectiveness, scale, and evasion. Unlike traditional attacks that required significant technical expertise, these attacks democratize cybercrime by removing barriers that previously limited who could execute sophisticated campaigns.
The fundamental shift lies in how language models process instructions. As Inma Martinez, AI Scientist and Global Chair for GenAI and Agentic AI projects at GPAI, explained in the webinar: "The thing about generative AI and chatbots and language models is that they are meant to operate by being given instructions. And they don't distinguish if the instructions come from the person training them or the person using them."
This characteristic transforms the threat landscape dramatically. Previously, executing a convincing business email compromise attack required language skills, cultural understanding, and patience. Now, attackers with nothing more than malicious intent can generate flawless communications in any language, create convincing supporting infrastructure, and scale operations exponentially.
The marketplace for these capabilities has matured rapidly. FraudGPT and similar tools are readily available on the dark web, providing turnkey solutions for anyone willing to pay. According to research, dark web AI tool mentions on cybercrime forums increased 219%—reflecting explosive growth in criminal AI adoption. As discussed in the Convergence webinar, eight million registered chatbot-enabled attacks were documented in Europe within just six months—demonstrating the unprecedented scale at which these threats now operate.