AI, People, and Policy: What We Learned from Convergence Season 4

Explore key takeaways from Season 4 of Convergence, covering how malicious AI is reshaping cybercrime, why human behavior remains a core vulnerability, and what evolving AI policy means for defenders.
May 22, 2025

Season 4 of The Convergence of AI + Cybersecurity brought together some of the brightest minds in the industry. From ethical hackers and threat researchers to CISOs and global policy leaders, this season featured a powerful mix of perspectives—grounded in reality, guided by experience, and packed with practical insights.

Across three chapters, we unpacked the intersection of human behavior, AI-driven attacks, and the policies shaping our digital future. If you missed it live, no problem. Below, we’ve summarized the key takeaways from each chapter. You can also watch all episodes (including chapters 1–9) in our Resource Center.

Chapter 10: Worm, Fraud, Ghost…Oh My: A Deep Dive into Malicious GPTs

In Chapter 10, Field CISO Mick Leach was joined by ethical hacker Jamie Woodruff and Abnormal’s Head of Threat Intelligence, Piotr Bujtchaila, for a deep dive into malicious AI tools like WormGPT and FraudGPT. These black-market models are not just theory—they’re in active use and capable of producing phishing content, malware, and fully automated fraud campaigns at scale.

“Malicious GPTs are completely stripped of any ethical safeguards,” said Woodruff. “They’re designed to aid attackers in things like social engineering and malware creation.” Even more concerning, these tools are readily available to non-technical users, making it easier than ever for anyone with malicious intent to launch convincing attacks. “You don’t need to be technically competent,” Woodruff added. “That’s the scary part.”

The conversation also revealed a sharp rise in large-scale AI-enabled fraud campaigns—some of which are structured like businesses, complete with CEOs, COOs, and support teams. According to Bujtchaila, this level of coordination, powered by automation, has turned AI into a “force multiplier for cybercrime.”

But it’s not all bad news. The same technology used to generate these threats can—and must—be used to defend against them. “We’re not fighting hackers in hoodies anymore,” said Woodruff. “We’re fighting AI-powered cybercrime networks…and the only way to stop them is with AI that’s just as smart on the defensive side.”

Watch Chapter 10 On-Demand

Chapter 11: The Human Element of BEC: What's Real, What's Hype, and What's Next

Chapter 11 shifted the focus from machines to people. Dr. Jessica Barker and Abnormal CIO Mike Britton joined Mick Leach to explore the psychology behind business email compromise (BEC)—one of the costliest cybercrimes in the world.

“Cybersecurity is still a people problem,” Barker explained. “Attackers use authority bias, urgency, flattery, and fear to bypass logic and manipulate people into acting.”

The panel emphasized that blaming employees doesn’t improve outcomes. Instead, organizations must foster a culture of empathy, awareness, and proactive education. “When someone falls for a scam, the first thing they say is, ‘I feel so stupid,’” Barker said. “But it doesn’t make you stupid—it makes you human.”

Britton added that the way organizations train employees matters just as much as what they teach. “We’ve created a system where people are afraid to click anything,” he said. “Phishing simulations should be useful—not just a gotcha moment.”

Panelists also called for a shift away from one-size-fits-all awareness training and toward real-time, human-centric education that helps employees recognize evolving threats in context. “You don’t want employees sitting on their hands waiting for security to greenlight everything,” Britton said. “You want them to feel empowered and informed.”

Chapter 12: AI and Cybersecurity Policy: Navigating Regulation and Compliance

In our final chapter of the season, Michael Daniel, President and CEO of the Cyber Threat Alliance, and James Yeager, VP of Public Sector at Abnormal, joined Mick Leach to explore how governments are approaching AI regulation and what security leaders need to prepare for.

“Right now, most governments are still figuring out what they want to regulate,” said Daniel. “It’s early days—and there are no clear answers yet.”

The panel acknowledged that while regulation is inevitable, there’s still an opportunity to shape how it’s implemented. “We need frameworks that protect people without stifling innovation,” said Yeager. “The real challenge is striking that balance.”

One major theme throughout the discussion was explainability. Security leaders want to understand how AI tools reach their decisions—especially in high-risk industries like defense and government. “You can’t just give them a verdict,” Yeager said. “You have to show your work.”

Daniel also warned of regulatory fragmentation if global governments don’t coordinate. “If you're selling in Germany, Israel, and the U.S., you can’t follow three different frameworks,” he said. “We need harmonization—or everyone loses.” As AI adoption accelerates, the path forward will depend on clear guardrails, open collaboration, and a shared commitment to building trustworthy systems.

Looking Ahead: Smarter Threats Demand Smarter Security

Season 4 reminded us that the cybersecurity landscape is changing—fast. But with that change comes opportunity:

  • Use AI to detect what’s abnormal, not just what’s known.

  • Invest in your people—they’re still your strongest defense.

  • Advocate for policies that make security smarter, not slower.

All 12 chapters of The Convergence of AI + Cybersecurity are now available on demand, and each session is eligible for ISC2 CPE credits.

We’re already preparing for Season 5, so stay tuned for what’s next—and in the meantime, see how Abnormal’s AI can help you get ahead of what’s abnormal.
