Not All AI Is Created Equal: What Makes Abnormal's Detection Engine Different

Every vendor claims AI. The difference is whether the system reads intent or chases tactics. Here's what makes Abnormal's detection engine different.

Lily Prest

May 14, 2026 / 4 min read

AI Is the Answer. But Which AI?

The security industry has largely settled on a conclusion: AI-powered detection outperforms rules. But outperforming rules is a low bar. The typical Abnormal customer sees 462 advanced attacks per 1,000 mailboxes per month bypassing Microsoft native controls.1

The harder question is what type of AI—trained on what, tuned how, and detecting what. Whether it's a secure email gateway, an ML classifier trained on threat intelligence, or a programmable detection rule, most approaches still depend on recognizing what an attack looks like, matching against known indicators, flagging suspicious payloads, or scoring messages against threat feeds. That works when attacks leave recognizable signatures. The attacks that cost organizations the most don't. They're text-based, socially engineered, and designed not to look like attacks at all.

We built our detection engine on behavioral AI from the start—not to recognize common attack tactics, but to understand normal behavior well enough to identify attacker intent when something deviates. That foundation shows up in three ways: detection that catches what others miss, precision that makes automation real, and a detection engine that keeps getting sharper.

The Foundation

Attune 1.0, our behavioral foundation model, was trained on more than one billion derived behavioral signals2 from production email traffic spanning thousands of organizations, from mid-market to Fortune 10. Because we connect through API-native access to Microsoft 365 and Google Workspace, Attune ingests signals that gateway-based architectures never touch: internal email, authentication events, tenant configuration, and application permissions. It then evaluates identity, behavior, and content jointly within a single model, not as separate scoring layers combined after the fact.

For every person and vendor relationship, Attune builds an individual behavioral profile: communication patterns, authentication norms, typical request types, normal cadence. That profile defines what normal looks like for a specific identity, not for the organization as a whole.

A sender with a clean authentication record, behaving in ways that do not fit their established pattern, making a request atypical for the relationship: none of those signals alone crosses a threshold. Together, in context, they reveal intent even when there's nothing traditionally malicious to flag.
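To make the idea concrete, here is a minimal, purely illustrative sketch of how individually weak deviations can compound against a per-identity baseline. The profile fields, signal names, and 0.3 weights are all invented for the example; Abnormal's actual model is proprietary and far richer than this.

```python
# Illustrative only: shows weak signals scored jointly against a
# per-identity profile, not Abnormal's actual detection logic.
from dataclasses import dataclass, field

@dataclass
class IdentityProfile:
    """Hypothetical per-identity baseline (field names are invented)."""
    typical_request_types: set = field(default_factory=set)
    usual_send_hours: range = range(8, 18)
    known_counterparties: set = field(default_factory=set)

def deviation_score(profile, message):
    """Score a message by how far it deviates from this identity's norms."""
    signals = {
        "atypical_request": message["request_type"] not in profile.typical_request_types,
        "off_hours": message["hour"] not in profile.usual_send_hours,
        "new_counterparty": message["counterparty"] not in profile.known_counterparties,
    }
    # Each deviation alone is weak (0.3, below a 0.8 threshold);
    # together, in context, they compound past it.
    score = sum(0.3 for fired in signals.values() if fired)
    return score, [name for name, fired in signals.items() if fired]

profile = IdentityProfile(
    typical_request_types={"status_update", "invoice"},
    known_counterparties={"vendor@acme.example"},
)
msg = {"request_type": "change_bank_details", "hour": 23,
       "counterparty": "vendor@acrne.example"}
score, fired = deviation_score(profile, msg)
assert score > 0.8  # three weak signals jointly cross the threshold
```

The point of the sketch is the shape of the decision, not the numbers: no single check fires on its own, but the joint evaluation against one identity's established pattern does.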

Because attack patterns seen at one organization inform detection across all others—through derived threat signals, not raw email content—we often encounter a novel campaign across multiple customers simultaneously, building detection confidence before any individual security team could write a rule. A model trained on a single organization's data would never see the pattern.

Detection That Needs No Prior Examples

Where that foundation matters most is where rules-based detection has no answer at all: attacks designed to look like normal business. Every rule requires a prior example; an attack has to succeed before a defense can be written for it. Behavioral modeling breaks that cycle.

In one attack we observed during a risk assessment, a threat actor hijacked an existing invoice thread between a vendor and customer, using a lookalike domain that differed by a single letter. The email contained no malicious links or flagged attachments, just a request to update banking details for an ongoing engagement. The employee replied and looped in two colleagues to help process the change. Conventional rule-based checks had nothing to flag: no malicious URL, no known indicator, no suspicious sender reputation. Behavioral AI caught it because the request didn't match the established behavior of that vendor relationship.
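One mechanical piece of that attack, the lookalike domain differing by a single letter, can be illustrated with a standard edit-distance check. This is a generic sketch, not Abnormal's detector, and the domain names are made up for the example.

```python
# Hypothetical illustration: flagging a sender domain within one edit
# of an established counterparty domain. Domains are invented.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via a single-row dynamic program."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            # dp[j]: deletion, dp[j-1]: insertion, prev: substitution/match
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (ca != cb))
    return dp[len(b)]

def lookalike_of(sender_domain, known_domains, max_dist=1):
    """Return the known domain this sender imitates, if any.

    Exact matches are trusted; near-misses within max_dist edits
    of a known counterparty are flagged as lookalikes."""
    if sender_domain in known_domains:
        return None
    for known in known_domains:
        if edit_distance(sender_domain, known) <= max_dist:
            return known
    return None

known = {"acme-corp.example"}
assert lookalike_of("acne-corp.example", known) == "acme-corp.example"
assert lookalike_of("acme-corp.example", known) is None
```

In practice a check like this is only one signal among many; as the attack above shows, the decisive evidence was that the banking-change request itself did not fit the vendor relationship.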

This isn't an edge case. Across 1,400+ organizations, 44% of employees who read a vendor email compromise (VEC) message engage with it—replying, forwarding, or both. In large enterprises, that number reaches 72%. The same principle holds when an internal account is compromised: a hijacked executive's email clears every authentication check, but the message structure and request type deviate from established behavior.

This is also why behavioral AI is the most durable defense against GenAI-generated threats. GenAI makes it easy for attackers to cycle through tactics—new wording, new pretexts, new sender personas. What it doesn't change is the objective: steal credentials, divert a payment, take over an account. Rules detect tactics. Behavioral AI detects intent.

Precision That Makes Automation Real

Detecting what others miss only matters if the system isn't burying teams in false positives to do it. For us, the two aren't a trade-off; they're the same capability.

The majority of our customers spend less than 30 minutes a week in the platform.3 For security teams used to spending hours a day on email alerts, this is far more than an incremental improvement—it's a different operating model. The reason is simple: a model that evaluates intent rather than matching surface patterns draws a sharper line between legitimate communication and attacks, enabling a level of precision that makes full automation viable.

Where most solutions still generate enough false positives to require daily analyst triage, Abnormal customers routinely report false positive rates near zero—one customer recently recorded a single false positive across 4.5 million messages.4 That precision hasn't come at the cost of coverage: since Attune launched, unique attack detections have increased by approximately 68%, and Attune now powers 85% of detections across the Abnormal Behavior Platform.2

As David Din, CIO of Virginia Beach City Public Schools, put it: "Abnormal is a set-it-and-forget-it solution, taking the worry out of cloud email security. The combination of behavioral AI to find malicious emails and automation to remediate them allows my team to focus on other things."

That efficiency doesn't mean opacity. For every detection, we surface the signals behind the verdict—what triggered it, what behavioral patterns it evaluated, and why it reached its conclusion. As detection becomes more autonomous, that visibility becomes more important, not less.

Detection That Keeps Getting Sharper

Catching novel attacks with precision solves today's problem. But attackers don't stand still, and neither can the model. The next question is whether the system keeps improving without someone manually keeping it current.

With most email security products, staying protected means staying busy: writing rules, updating signatures, tuning models. We work differently: AI-driven systems monitor, label, and tune detection to every customer environment, every day.

An AI-driven labeling pipeline and dedicated analyst team together generate the ground truth that teaches the model which emails are malicious and which are not. Automated systems label at scale: behavioral signals, cross-customer attack patterns, user reports. The analyst team handles the rest, reviewing ambiguous cases daily and ensuring that rare and emerging attack types receive expert-quality labels. Critically, this pipeline runs continuously in production, not just at training time.

The two pipelines reinforce each other. Human labels improve the automated systems; automated signals surface the cases most in need of human review. Every customer report, every analyst review, every detection deployed feeds back into the system. Protection gets sharper every week with no rules to write, no models to tune, and no signatures to maintain.
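The routing logic described above can be sketched schematically. This is not Abnormal's actual pipeline; the confidence threshold, queue names, and label function are assumptions made for illustration. The shape is the key point: high-confidence cases are labeled automatically at scale, ambiguous cases go to analysts, and the resulting expert labels become new ground truth.

```python
# Schematic sketch of an automated-plus-human labeling loop
# (illustrative only; threshold and structure are invented).

def route_for_labeling(messages, model_confidence, threshold=0.9):
    """Split messages into auto-labeled and human-review queues."""
    auto, review = [], []
    for msg in messages:
        conf = model_confidence(msg)
        (auto if conf >= threshold else review).append(msg)
    return auto, review

ground_truth = []  # expert labels that feed back into training

def analyst_review(queue, label_fn):
    """Human review: ambiguous cases get expert labels, which become
    ground truth for the next model update."""
    for msg in queue:
        ground_truth.append((msg, label_fn(msg)))

msgs = ["benign newsletter", "invoice thread hijack", "password reset"]
confidence = lambda m: 0.95 if "newsletter" in m else 0.6
auto, review = route_for_labeling(msgs, confidence)
analyst_review(review, lambda m: "malicious" if "hijack" in m else "benign")
```

In this toy run, only the high-confidence message is auto-labeled; the two ambiguous ones land in the review queue, and the analyst's verdicts accumulate as ground truth—mirroring the reinforcement loop between the two pipelines.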

The Test That Matters

Which AI? The kind that knows your people—tens of thousands of behavioral signals per person mapping communication patterns, authentication norms, typical request types, and relationship history. The kind that reads intent, not just content. The kind that automatically recognizes when behavior is abnormal for a specific person, even if the same behavior would look normal somewhere else in the organization.

Every vendor claims behavioral AI. The best test is whether they can show you the attacks your current solution is missing, in your environment, during an evaluation. Because Abnormal connects through API, with no MX record changes or mail flow disruption, organizations can see those attacks in their own environment without a lengthy deployment process.

See the attacks your current platform might be missing. Schedule a personalized demo.

Schedule a Demo

1Abnormal internal data, September 2025. Average attacks per 1,000 mailboxes across all Abnormal customers with 3,000–5,000 mailboxes using Microsoft 365, comparing April 2023 vs. May 2025.
2Based on internal Abnormal data.
3Reported across 100 Abnormal customers.

4Data collected from the Abnormal platform; individual customer result.

