How Hackers Are Outsmarting Modern Security Defenses and Why Behavioral Context Matters
From stolen session cookies to deepfakes your own mom would fall for, the tactics outsmarting enterprise security require a fundamental shift in how we think about protection.
January 22, 2026

For years, security strategy followed a simple rule: build a strong wall to keep attackers out and trust everything inside the network. Firewalls, VPNs, and on‑prem monitoring protected a world where users, devices, and data all lived inside the same castle.
That world is long gone.
Today’s work is remote‑first and SaaS‑first. Employees can operate from anywhere, often on personal devices your organization doesn’t fully manage. Critical data sits across email, cloud storage, collaboration tools, and line-of-business apps you don’t own or control. At the same time, AI assistants are logging in and taking actions on behalf of employees, often without a human in the loop. Together, these shifts have stretched your environment far beyond anything a traditional perimeter can protect.
Evolving Threat Tactics Pose New Dangers
Attackers have kept pace with these changes in all the wrong ways. They have industrialized their operations by renting ready‑made phishing kits and MFA‑bypass tooling as subscription services, and now layer in deepfake voices to evade controls that still depend on human verification. In this environment, the old castle‑and‑moat model fails in predictable ways:
A remote employee’s personal laptop may have weak encryption and limited endpoint protection, enabling real-time credential harvesting.
Cloud documents and collaboration spaces are accessed directly from the internet without VPNs or meaningful context on who or what is connecting.
Once an endpoint or identity is compromised, internal trust lets attackers move laterally and quietly escalate their reach long before anyone sees an alert.
Taken together, these shifts point to an uncomfortable truth. You can’t stop every intrusion—so your real advantage is how fast you recognize the abnormal behavior that signals something is wrong.
The False Comfort of Multi-Factor Authentication
Not long ago, enterprises were sold MFA as the silver bullet for credential‑based attacks. Many assumed that once MFA was in place, stolen usernames and passwords would no longer be enough to gain access, so credential phishing dropped down the risk list.
In reality, MFA protects the initial login, but it is only one layer of defense. MFA can be bypassed if an attacker can hijack a legitimate session, and attackers have systematically developed techniques to do exactly that.
One of the most effective approaches relies on adversary‑in‑the‑middle phishing proxies such as Evilginx. These platforms sit between the user and the legitimate authentication service, show the exact login page the user expects to see, relay the MFA prompt, and capture the resulting session cookie. This is what the attacker really wants because it allows them to operate as the legitimate user without triggering additional MFA challenges.
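The reason a stolen cookie is so valuable is that most services will honor any request that presents it, regardless of where that request comes from. One defensive counter is to bind the session to the context it was issued in. The sketch below is illustrative only (the field names and fingerprint format are assumptions, not any vendor's API) and shows why a cookie replayed from attacker infrastructure can be caught even though the credential itself is valid:

```python
# Sketch: why a stolen session cookie bypasses MFA, and a basic
# context-binding check a defender might apply. All names and the
# fingerprint format here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class SessionContext:
    ip_asn: str           # network the request originated from
    user_agent: str       # browser or tooling string
    tls_fingerprint: str  # e.g., a JA3-style client hash

def is_suspicious_reuse(issued: SessionContext, observed: SessionContext) -> bool:
    """Flag a session cookie presented from a context that does not match
    the one it was issued in. An AitM proxy relays the victim's real login,
    but later replays the captured cookie from its own infrastructure, so
    these attributes typically diverge."""
    return (
        observed.tls_fingerprint != issued.tls_fingerprint
        or observed.ip_asn != issued.ip_asn
        or observed.user_agent != issued.user_agent
    )

victim = SessionContext("AS7922", "Mozilla/5.0 (Windows NT 10.0)", "ja3:ab12")
attacker = SessionContext("AS14061", "python-requests/2.31", "ja3:9f3c")

assert not is_suspicious_reuse(victim, victim)  # normal reuse by the victim
assert is_suspicious_reuse(victim, attacker)    # replay from attacker infra
```

In practice, attackers can spoof the user agent, which is exactly why single-attribute checks fall short and the behavioral context discussed later matters.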
According to Verizon's 2025 Data Breach Investigations Report (DBIR), 31% of MFA bypass attacks relied on token theft, making it the most commonly observed technique. This trend is closely tied to the growth of phishing‑as‑a‑service offerings that package these techniques as subscription products.
When Cybercrime Became a Subscription
Phishing‑as‑a‑service (PhaaS) or “phishing on demand” is where criminals pay for ready‑made kits and infrastructure that let them run advanced campaigns without much technical skill. This is a significant evolution from the older efforts to commoditize phishing by selling bundles of compromised accounts on the dark web for a one‑time fee.
PhaaS operations are designed to be resilient. Taking down one popular service is usually followed by several new ones popping up in its place, and many providers now advertise over mainstream messaging channels where access can be bought almost as easily as consumer software. Modern kits often include “bulletproof” or self‑healing links. They keep malicious pages alive by routing them through attacker‑controlled infrastructure that can spin up a new phishing page when platforms like SharePoint or Adobe remove the old one.
As these capabilities spread, it becomes harder to say who is really behind an attack. While some organizations still try to link campaigns to nation‑state actors or named groups, many damaging incidents now come from relatively low‑skilled actors who rent sophisticated tools for a few hundred dollars. Teenagers running “done for you” attacks while simultaneously playing Minecraft are just as likely to sit behind high‑impact social engineering incidents as the seasoned groups you’re used to tracking.
How Attackers Blend Into Normal Activity
A foothold often starts with a phone‑based phishing call or a request for quote (RFQ) email that nudges the recipient to install “support” software or trust a routine‑looking link. Once attackers are in, they focus on blending in with your normal activity, and legacy tools may never raise an alarm. From there, the attacker’s moves look a lot like everyday business, even as they move laterally, stage data theft, or set up ransomware.
We keep seeing the same patterns:
Attackers install legitimate remote access tools such as ScreenConnect to control machines exactly the way your IT teams and service providers already do. A live attack session becomes almost indistinguishable from a real support session.
Highly convincing RFQ and invoice fraud scams are crafted to mimic normal customer interactions. Messages often appear legitimate because they come from real vendor accounts that have been compromised.
Low‑effort reconnaissance emails go to massive distribution lists with vague prompts like “What is this message?” Attackers then target people who reply with tailored credential phishing or payment fraud because they have already shown they will engage with unknown senders.
At the infrastructure layer, phishing pages increasingly sit on platforms such as SharePoint, Adobe, and DocuSign that your business depends on every day. You cannot block these services outright, and link‑scanning that focuses on known‑bad domains will often mark them as safe because the host itself is legitimate. OAuth abuse creates a similar blind spot. When attackers get valid tokens from third‑party apps connected to tools like Salesforce or Salesloft, they can pull data through the same APIs your legitimate integrations use. To traditional controls, it appears to be normal application activity.
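Because token-based API calls are indistinguishable from legitimate integration traffic on a per-request basis, one hedged way to surface abuse is to compare each token's activity against its own historical baseline. The sketch below is a simplified illustration, not a real product's detection logic; the numbers and threshold are assumptions:

```python
# Sketch: flagging OAuth token activity that deviates sharply from an
# integration's historical baseline. The volumes and the z-score
# threshold are illustrative assumptions, not tuned values.

from statistics import mean, pstdev

def anomaly_score(history: list[int], today: int) -> float:
    """Z-score of today's API call volume against the token's history.
    A stolen token pulling bulk data stands out against a steady
    integration workload."""
    mu, sigma = mean(history), pstdev(history)
    if sigma == 0:
        return 0.0 if today == mu else float("inf")
    return (today - mu) / sigma

# A CRM sync integration that normally makes a few hundred calls per day.
history = [180, 210, 195, 205, 190, 200, 220]
assert anomaly_score(history, 205) < 3        # routine daily sync
assert anomaly_score(history, 40_000) > 3     # bulk export via a stolen token
```

Real detections would look at more than volume (endpoints touched, objects exported, time of day), but the principle is the same: the token is valid, so the behavior has to be the tell.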
Once these techniques are combined with clever use of AI, the gaps in rule‑based controls become even more obvious, as the next story shows.
The Customer Who Nearly Lost $240,000 to a Deepfake
A global enterprise believed it had strong safeguards around changes to supplier banking details, since any request to update payment information required a phone call for verification. On paper, that looked like a robust control against invoice fraud. In practice, it wasn’t enough.
An attacker began by compromising a third-party vendor account and sending an email requesting updated bank information. The message came from a known contact and matched prior communication patterns. A lookalike domain on the CC line gave the attacker a backup channel if access to the original account was lost, but to a busy accounts payable team, the email looked routine.

Following policy, the AP specialist picked up the phone and spoke to a familiar voice that confidently confirmed the new banking details. What the organization could not see was that the attacker had taken a publicly available webinar recording, fed it into AI tools, and generated a voice clone accurate enough to hold a real‑time conversation. The clone was convincing enough that the AP specialist moved forward with updating the banking information.

The fraud attempt was only stopped because Abnormal’s behavioral AI flagged anomalies in the email thread and surrounding context, from the typosquatted domain to the structure of the request. Traditional tools likely would have treated the transaction as legitimate, because every control on the checklist had technically been followed.
Beat Deepfakes With Behavioral AI Detection
These deepfake attacks and AI-generated impersonations highlight a fundamental limitation of traditional security controls. When attackers can convincingly mimic voices, writing styles, and login patterns, validating a single event—such as a successful authentication or a familiar sender—no longer provides enough confidence.
The advantage comes from understanding behavior in context and over time. Instead of asking whether a login or message passed a check, behavioral detection asks whether the sequence of actions that follows aligns with how that user, system, or AI agent normally operates.
Abnormal’s behavior platform applies this approach across email, identity, and SaaS activity. By modeling millions of legitimate interactions, it establishes a baseline of known-good behavior that reflects real workflows, communication patterns, and access paths. Activity that deviates from those patterns, no matter how legitimate it appears on the surface, is automatically flagged.
In practice, this could mean identifying access from a familiar region that is still risky because it originates from infrastructure your workforce never uses, or detecting automation replaying stolen session cookies when authentication logs show tooling that does not match real user behavior.
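The replayed-cookie signal above can be approximated with a simple observation: humans are bursty, while replay tooling tends to fire requests at near-fixed intervals. The following is a minimal sketch of that idea only; the jitter threshold and minimum sample size are illustrative assumptions, and production systems would combine many such signals:

```python
# Sketch: spotting machine-like regularity in event timing. Human
# activity is bursty; automation replaying stolen session cookies tends
# to be metronomic. The jitter threshold is an illustrative assumption.

from statistics import pstdev

def looks_automated(event_times: list[float], max_jitter_s: float = 0.2) -> bool:
    """True if inter-event gaps are nearly constant across enough events
    to be meaningful."""
    gaps = [b - a for a, b in zip(event_times, event_times[1:])]
    return len(gaps) >= 5 and pstdev(gaps) < max_jitter_s

human = [0.0, 3.1, 9.8, 10.4, 22.0, 31.7]    # irregular browsing cadence
script = [0.0, 2.0, 4.01, 6.0, 7.99, 10.0]   # fixed two-second polling loop

assert not looks_automated(human)
assert looks_automated(script)
```

No single heuristic like this is decisive on its own; the point of behavioral detection is that timing, tooling fingerprints, and access paths are evaluated together against the user's established baseline.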
Because Abnormal uses self-learning AI, it evolves alongside attackers. As adversaries adopt generative tools and continuously adjust tactics, Abnormal adapts without relying on brittle rules or constant manual tuning, making it possible to surface misuse of access earlier and with greater confidence.
Redefining a Strong Security Posture
Organizations need to move on from a security strategy built only on MFA, endpoint agents, and static policies. These controls still matter, but they no longer define a strong posture in an environment where attackers blend in rather than break in.
Identity compromise is no longer a question of if, but when. Effective security is designed to detect and disrupt abnormal behavior quickly, not just stop everything at the front door. The organizations that cope best act on the reality that the perimeter is already gone—and they adapt before attackers do.
Interested in learning more about how behavioral AI can protect your organization from advanced threats? Schedule a demo with Abnormal today!