$893M in Losses: What the 2025 IC3 Report Reveals About AI-Driven Cybercrime
The FBI’s 2025 IC3 report logged 22,364 AI-related complaints and nearly $893 million in losses, showing how AI is accelerating business email compromise and social engineering at scale.
April 15, 2026 / 4 min read

The FBI’s 2025 IC3 report confirms what security teams are experiencing firsthand: attackers are using AI to make proven cybercrime tactics devastatingly effective, causing serious financial and reputational losses across industries.
For the first time in its nearly 25-year history, the IC3 report includes a dedicated section on artificial intelligence. That alone is a signal. But the more important outcome is the number attached to it: nearly $893 million in reported losses tied to AI-related complaints in 2025.
Crucially, that impact is concentrated in fraud. The IC3 data shows that 85% of all losses now come from cyber-enabled fraud attacks that often exploit human behavior, not technical compromise—and the human layer is where we see AI-enabled methods succeed at alarming rates.
Abnormal AI Field CISO Patricia Titus puts the scale of the problem into perspective:
"$893 million in AI-enabled fraud. $3 billion from Business Email Compromise alone. $20.87 billion in total cybercrime losses in 2025. One million complaints filed.
And here's the part that should keep every security leader up at night: the FBI openly admits the AI-related number is almost certainly higher than reported, because most victims don't even realize AI was involved in the attack against them."
The Big Picture: Cybercrime Is Surging
The broader report is stark. In 2025, the FBI’s Internet Crime Complaint Center received 1,008,597 complaints and recorded $20.877 billion in losses, a 26% year-over-year increase.
Cyber-enabled fraud accounted for roughly 453,000 complaints and more than $17.7 billion in losses, representing about 85% of all reported losses.
The takeaway is straightforward: the cybercrime economy is scaling rapidly, and it is increasingly optimized around human behavior. Most of the financial damage comes not from technical exploitation but from convincing people to act, by exploiting workflows, communications, and workplace psychology.
$893M and Counting: The Scale of AI-Driven Cybercrime
The FBI logged 22,364 complaints with an AI-related nexus in 2025, with adjusted losses exceeding $893 million.
That level of loss puts AI squarely inside the most financially relevant areas of cybercrime, not as an edge case, but as a category of its own in high-impact fraud.
This year’s loss figure is also likely conservative. AI-related complaints depend on what victims recognize and describe in their submissions, meaning the real impact is almost certainly higher. AI is only counted when it’s obvious. Increasingly, it is anything but.
How Attackers Are Using AI Today
1. Business Email Compromise Gets Smarter
Business email compromise (BEC) remains one of the most financially damaging cybercrime categories in the report, driving $3.046 billion in total losses in 2025. Within that category, the FBI attributed more than $30 million in losses specifically to BEC with a confirmed AI component.
AI is removing one of the last constraints on BEC: execution quality.
Attackers can now generate convincing impersonations of executive communications at any organization they choose. Tone, structure, and intent can be tailored to match internal communication patterns with far greater precision, eliminating the inconsistencies that historically made phishing detectable. And because AI makes highly targeted delivery cheap, attackers can place these messages inside real workflows at exactly the right moment to elicit a response.
Patricia Titus is more direct about how this plays out:
"BEC attackers are now using chat-generation tools to produce executive-impersonation emails with the precise tone, vocabulary, and contextual detail of your specific leadership — then layering in voice cloning to place follow-up calls that sound exactly like your CFO confirming a wire transfer. This isn't phishing. This is precision fraud at machine speed."
BEC has always been effective because it targets trust. AI makes that trust easier to manufacture and harder to challenge based on the message alone.
2. Impersonation Scales Beyond Human Limits
The report also highlights the continued growth of impersonation-based attacks, including government impersonation, which drove $797 million in losses in 2025.
These attacks all rely on the same core mechanic: convincing the recipient that the sender is someone they trust. AI fundamentally changes how that trust is established.
Instead of relying on scripts or templates, attackers now generate context-specific identities matching tone, role, and communication style across organization-specific scenarios. Executive requests, vendor communications, and authority-based outreach can all be produced with a level of consistency that was previously difficult to maintain.
Impersonation is no longer limited by how well an attacker can mimic a single individual. It can be repeated across targets, roles, and organizations with minimal degradation in quality, turning what once required time-consuming research and careful crafting into an automated, scalable attack model.
3. Attacks Become Persistent, Not One-Off
One of the clearest patterns in the IC3 data is that the most costly scams are not single interactions but sustained engagements.
In categories like investment fraud, which accounted for $8.648 billion in losses in 2025, attackers maintain ongoing communication with victims, adapting their messaging over time and reinforcing credibility through repeated interactions.
That same execution model is increasingly visible in enterprise-targeted attacks.
BEC and impersonation campaigns are no longer confined to a single email. They unfold as sequences—initial outreach, follow-up, reinforcement—designed to align with real workflows and pressure recipients into action.
AI makes those sequences easier to generate and harder to distinguish from legitimate communication. Attacks that once depended on a single convincing message now succeed by maintaining credibility over time.
How to Defend Against AI-Driven Cybercrime
The IC3 report doesn’t point to a wholesale shift in attack types; it highlights a shift in how existing ones succeed. That shift calls for detection that applies defensive AI and behavioral intelligence to the layer attackers now exploit: human behavior and the workflows around it.
Legacy controls that rely on identifying known-bad content, links, or signatures are inherently reactive. They depend on repeatability. AI-driven attacks remove that repeatability.
Effective defense now depends on answering a different question. Not “Does this message look malicious?” but “Does this interaction make sense in the context of what’s normal for a given organization?”
That requires visibility into:
how individuals typically communicate
who they interact with
what normal financial and operational workflows look like
Abnormal’s behavioral approach is built around modeling identity, relationships, and communication patterns across email and connected applications. Instead of relying on static indicators, it establishes a baseline of expected behavior and detects deviations in real time, whether that’s an unusual request, a subtle shift in tone, or a message that doesn’t align with historical patterns.
As AI improves the quality of social engineering, those deviations become the most reliable signal for detecting novel attacks.
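To make that behavioral approach concrete, here is a minimal sketch of baseline-and-deviation scoring. It is illustrative only, not Abnormal's implementation: the signals (sender-recipient history, payment-request frequency, urgency cues), field names, and thresholds are all assumptions made for the example.

```python
# Hypothetical sketch: score an email against a per-sender behavioral baseline
# built from historical messages. Signals and thresholds are illustrative only.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    recipient: str
    mentions_payment: bool   # e.g., wire transfer, invoice, gift cards
    is_urgent: bool          # urgency cues in subject or body

class BehavioralBaseline:
    def __init__(self, history: list[Email]):
        # Who each sender normally writes to, and how often they discuss payments.
        self.pair_counts = defaultdict(int)
        self.payment_history = defaultdict(list)
        for e in history:
            self.pair_counts[(e.sender, e.recipient)] += 1
            self.payment_history[e.sender].append(e.mentions_payment)

    def anomaly_score(self, e: Email) -> float:
        score = 0.0
        # Deviation 1: sender has never emailed this recipient before.
        if self.pair_counts[(e.sender, e.recipient)] == 0:
            score += 0.5
        # Deviation 2: sender rarely discusses payments, but this message does.
        hist = self.payment_history.get(e.sender, [])
        usual_rate = sum(hist) / len(hist) if hist else 0.0
        if e.mentions_payment and usual_rate < 0.1:
            score += 0.3
        # Deviation 3: urgency combined with a financial request.
        if e.is_urgent and e.mentions_payment:
            score += 0.2
        return min(score, 1.0)

# Usage: a "CFO" suddenly emailing payroll with an urgent wire request scores high.
history = [Email("cfo@corp.example", "controller@corp.example", False, False)] * 20
baseline = BehavioralBaseline(history)
suspect = Email("cfo@corp.example", "payroll@corp.example", True, True)
print(baseline.anomaly_score(suspect))  # 1.0 -> flag for review
```

The point of the sketch is the shift in question it encodes: rather than asking whether a message matches a known-bad indicator, it asks whether this interaction is plausible given how this sender has historically behaved.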
AI-Powered Attacks Require AI-Native Defenses
The most important takeaway from the IC3 report isn’t just the eye-opening $893 million tied to AI-related losses. It’s what that number represents: a shift toward attacks that are easier to execute, harder to detect, and increasingly dependent on manipulating human behavior.
AI didn’t create that model, but it now powers next-generation attacks that pair the human focus of social engineering with the scale and precision needed to turn a single successful deception into significant losses.
Organizations that continue to rely on static detection methods will find those signals increasingly unreliable. Those that invest in understanding behavior, identity, and context will be better positioned to detect the subtle anomalies that define modern AI-powered threats.
Patricia Titus offers one final perspective:
"$893 million is the floor, not the ceiling.
The question for every CISO is simple: was your security stack built for the world the FBI just described?"