
The Adversary's New Assistant: Weaponizing AI Chatbots

AI tools may have guardrails—but adversaries are relentless in finding ways around them.

In this new era, cybercriminals are manipulating legitimate AI platforms like ChatGPT, Gamma, and Canva to produce convincing phishing lures, malicious scripts, and automated fraud campaigns at scale. Tools built for innovation are now being weaponized for exploitation.

In Chapter 13 of The Convergence of AI + Cybersecurity series, experts from Abnormal AI and GPAI uncover how attackers are bypassing safeguards, exploiting generative AI systems, and reshaping the threat landscape with machine-driven deception.

Watch this on-demand webinar to learn:

  • How attackers manipulate traditional AI chatbots to generate harmful outputs

  • Real-world examples of how legitimate AI-powered tools are being abused for fraud and phishing

  • What “adversarial AI” really means—and how defenders can adapt faster

  • Practical steps to protect your organization in an increasingly AI-enabled threat landscape

Fill out the form to view the webinar.

Speakers

Piotr Wojtyla

Head of Threat Intel & Platform

Abnormal AI

Inma Martinez

AI Scientist and Global Chair for GenAI and Agentic AI Projects

GPAI


After viewing this resource, you are eligible for 1 CPE credit through ISC2.

Watch Now
