AI, People, and Policy: What We Learned from Convergence Season 4

Explore key takeaways from Season 4 of Convergence, covering how malicious AI is reshaping cybercrime, why human behavior remains a core vulnerability, and what evolving AI policy means for defenders.
May 22, 2025

Season 4 of The Convergence of AI + Cybersecurity brought together some of the brightest minds in the industry. From ethical hackers and threat researchers to CISOs and global policy leaders, this season featured a powerful mix of perspectives—grounded in reality, guided by experience, and packed with practical insights.

Across three chapters, we unpacked the intersection of human behavior, AI-driven attacks, and the policies shaping our digital future. If you missed it live, no problem. Below, we’ve summarized the key takeaways from each chapter. You can also watch all episodes (including chapters 1–9) in our Resource Center.

Chapter 10: Worm, Fraud, Ghost…Oh My: A Deep Dive into Malicious GPTs

In Chapter 10, Field CISO Mick Leach was joined by ethical hacker Jamie Woodruff and Abnormal’s Head of Threat Intelligence, Piotr Bujtchaila, for a deep dive into malicious AI tools like WormGPT and FraudGPT. These black-market models are not just theory—they’re in active use and capable of producing phishing content, malware, and fully automated fraud campaigns at scale.

“Malicious GPTs are completely stripped of any ethical safeguards,” said Woodruff. “They’re designed to aid attackers in things like social engineering and malware creation.” Even more concerning, these tools are readily available to non-technical users, making it easier than ever for anyone with malicious intent to launch convincing attacks. “You don’t need to be technically competent,” Woodruff added. “That’s the scary part.”

The conversation also revealed a sharp rise in large-scale AI-enabled fraud campaigns—some of which are structured like businesses, complete with CEOs, COOs, and support teams. According to Bujtchaila, this level of coordination, powered by automation, has turned AI into a “force multiplier for cybercrime.”

But it’s not all bad news. The same technology used to generate these threats can—and must—be used to defend against them. “We’re not fighting hackers in hoodies anymore,” said Woodruff. “We’re fighting AI-powered cybercrime networks…and the only way to stop them is with AI that’s just as smart on the defensive side.”

Watch Chapter 10 On-Demand

Chapter 11: The Human Element of BEC: What's Real, What's Hype, and What's Next

Chapter 11 shifted the focus from machines to people. Dr. Jessica Barker and Abnormal CIO Mike Britton joined Mick Leach to explore the psychology behind business email compromise (BEC)—one of the costliest cybercrimes in the world.

“Cybersecurity is still a people problem,” Barker explained. “Attackers use authority bias, urgency, flattery, and fear to bypass logic and manipulate people into acting.”

The panel emphasized that blaming employees doesn’t improve outcomes. Instead, organizations must foster a culture of empathy, awareness, and proactive education. “When someone falls for a scam, the first thing they say is, ‘I feel so stupid,’” Barker said. “But it doesn’t make you stupid—it makes you human.”

Britton added that the way organizations train employees matters just as much as what they teach. “We’ve created a system where people are afraid to click anything,” he said. “Phishing simulations should be useful—not just a gotcha moment.”

Panelists also called for a shift away from one-size-fits-all awareness training and toward real-time, human-centric education that helps employees recognize evolving threats in context. “You don’t want employees sitting on their hands waiting for security to greenlight everything,” Britton said. “You want them to feel empowered and informed.”

Chapter 12: AI and Cybersecurity Policy: Navigating Regulation and Compliance

In our final chapter of the season, Michael Daniel, President and CEO of the Cyber Threat Alliance, and James Yeager, VP of Public Sector at Abnormal, joined Mick Leach to explore how governments are approaching AI regulation and what security leaders need to prepare for.

“Right now, most governments are still figuring out what they want to regulate,” said Daniel. “It’s early days—and there are no clear answers yet.”

The panel acknowledged that while regulation is inevitable, there’s still an opportunity to shape how it’s implemented. “We need frameworks that protect people without stifling innovation,” said Yeager. “The real challenge is striking that balance.”

One major theme throughout the discussion was explainability. Security leaders want to understand how AI tools reach their decisions—especially in high-risk industries like defense and government. “You can’t just give them a verdict,” Yeager said. “You have to show your work.”

Daniel also warned of regulatory fragmentation if global governments don’t coordinate. “If you're selling in Germany, Israel, and the U.S., you can’t follow three different frameworks,” he said. “We need harmonization—or everyone loses.” As AI adoption accelerates, the path forward will depend on clear guardrails, open collaboration, and a shared commitment to building trustworthy systems.

Looking Ahead: Smarter Threats Demand Smarter Security

Season 4 reminded us that the cybersecurity landscape is changing—fast. But with that change comes opportunity:

  • Use AI to detect what’s abnormal, not just what’s known.

  • Invest in your people—they’re still your strongest defense.

  • Advocate for policies that make security smarter, not slower.

All 12 chapters of The Convergence of AI + Cybersecurity are now available on demand, and each session is eligible for ISC2 CPE credits.

We’re already preparing for Season 5, so stay tuned for what’s next—and in the meantime, see how Abnormal’s AI can help you get ahead of what’s abnormal.