Graph of Models and Features

February 2, 2021

At the core of all Abnormal’s detection products sits a sophisticated web of prediction models. For any of these models to function, we need deep and thoughtfully engineered features, careful modeling of sub-problems, and the ability to join data from a set of databases.

For example, one type of email attack we detect is called business email compromise (BEC). A common BEC attack is a “VIP impersonation” in which the attacker pretends to be the CEO or other VIP in a company in order to convince an employee to take some action. Some of the inputs to a model detecting this sort of attack include:

  1. A model evaluating how closely the sender’s name matches a VIP’s name (indicating impersonation)
  2. NLP models applied to the text of the message
  3. Known communication patterns for the individuals involved
  4. The identities of the individuals involved, extracted from an employee database
  5. … and many more

All these attributes are carefully engineered and may rely on one another in a directed graphical fashion.
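
The inputs above can be sketched as sub-model outputs feeding a top-level BEC model. This is a minimal illustration, not Abnormal's actual implementation: the attribute names, the token-overlap similarity, and the weighted-sum "model" are all stand-ins for the real engineered features and trained models.

```python
# Hypothetical sketch: sub-model outputs become inputs to a top-level BEC model.
# All names and logic here are illustrative stand-ins.

def vip_name_similarity(sender_name: str, vip_names: list[str]) -> float:
    """Crude similarity: best fraction of a VIP's name tokens found in the sender name."""
    sender_tokens = set(sender_name.lower().split())
    best = 0.0
    for vip in vip_names:
        vip_tokens = set(vip.lower().split())
        best = max(best, len(vip_tokens & sender_tokens) / len(vip_tokens))
    return best

def bec_score(features: dict[str, float]) -> float:
    """Stand-in for a trained model: a weighted sum over engineered features."""
    weights = {"vip_name_similarity": 0.6, "text_urgency": 0.3, "is_new_sender": 0.1}
    return sum(weights[k] * features.get(k, 0.0) for k in weights)

features = {
    "vip_name_similarity": vip_name_similarity("Jane D0e", ["Jane Doe", "John Smith"]),
    "text_urgency": 0.8,   # e.g. from an NLP model over the message body
    "is_new_sender": 1.0,  # e.g. from known communication patterns
}
score = bec_score(features)  # each input is itself the product of upstream work
```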

This article describes Abnormal’s graph-of-attributes system which makes this type of interconnected modeling scalable. This system has enabled us to grow our ML team while continuing to rapidly innovate.

Attributes

We store all original entity data as rich Thrift objects (for example, a Thrift object representing an email or an account sign-in). This gives us flexibility in the data types we log, straightforward backward compatibility, and understandable data structures. But as soon as we want this data consumed by data science engines and models, we convert it into attributes. An attribute is a simply-typed object (float / int / string / boolean) with a numeric attribute ID.
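
A minimal sketch of that definition, assuming a flat record type; the field names and the example ID are illustrative, not Abnormal's schema:

```python
from dataclasses import dataclass
from typing import Union

# A simply-typed value: one of the four primitive types named above.
AttrValue = Union[float, int, str, bool]

@dataclass(frozen=True)
class Attribute:
    attribute_id: int  # numeric ID identifying this attribute
    value: AttrValue   # flat, primitive value (float / int / string / boolean)

# Hypothetical example: a similarity score produced by an upstream model.
sender_name_similarity = Attribute(attribute_id=1042, value=0.93)
```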


Attributes vs. Features: Attributes are conceptually similar to features, but they may not be quite ready to feed directly into an ML model; they should, however, be simple to convert into a form models can consume. All the heavy lifting, such as running inference on a sub-model or hydrating from a database, should occur at attribute extraction time.

The core principles we are working off include:

  • Attributes can rely on multiple modes of input: other raw attributes, outputs of models, or data hydrated from a database lookup or join
  • Attributes should be flat data (i.e. primitives) and representable in a columnar database
  • Attributes should be simple to convert to features (for example you may need to convert a categorical attribute into a one-hot vector)
  • We will always need to change and improve attributes over time
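
The one-hot conversion mentioned above can be sketched as follows; the vocabulary and attribute are hypothetical examples, and the vocabulary would be fixed at training time, with unseen values mapping to all zeros:

```python
# Convert a categorical (string) attribute into a one-hot feature vector.
def one_hot(value: str, vocabulary: list[str]) -> list[int]:
    return [1 if value == v else 0 for v in vocabulary]

# Hypothetical categorical attribute: the type of an email attachment.
attachment_types = ["pdf", "docx", "html", "zip"]
one_hot("html", attachment_types)  # [0, 0, 1, 0]
one_hot("exe", attachment_types)   # [0, 0, 0, 0] -- unseen value
```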

Consuming Attributes: Once data is converted into a columnar format, it can be consumed in many ways—ingested into a columnar store for analytics, tracked in metrics to monitor distributional shifts, and converted directly into a feature dataframe ready for training with minimal extra logic.
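
Because attributes are flat primitives keyed by numeric ID, the pivot to a columnar layout is mechanical. A sketch under assumed types (per-event attribute maps, a zero default for missing values; both are illustrative choices, not Abnormal's actual storage format):

```python
# Pivot per-event attribute records into a columnar layout:
# one column per attribute ID, one row per event.
def to_columns(
    events: list[dict[int, float]], attribute_ids: list[int]
) -> dict[int, list[float]]:
    # Missing attributes are filled with 0.0 here purely for illustration.
    return {aid: [event.get(aid, 0.0) for event in events] for aid in attribute_ids}

events = [
    {1042: 0.93, 2001: 1.0},  # event 1's extracted attributes
    {1042: 0.10},             # event 2 is missing attribute 2001
]
columns = to_columns(events, [1042, 2001])
# {1042: [0.93, 0.1], 2001: [1.0, 0.0]}
```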

Directed Graph of Attributes

Computing attributes as a directed graph allows enormous flexibility for parallel development by multiple engineers. If each attribute declares its inputs, we can ensure everything is queried and calculated in the correct order. This enables attributes of multiple types:

  1. Raw features
  2. Heuristics that use many other features as input
  3. Models that make a prediction from many other features
  4. Embeddings

[Image: Abnormal’s Attribute Hydration Graph]

Explicitly encoding the graph of attributes seems complex, but it saves us painful headaches down the road when we want to use one attribute as an input to another.
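
The "declare your inputs, compute in the correct order" idea is a topological sort over the dependency graph. A sketch using Python's standard-library `graphlib`, with illustrative attribute names rather than real ones:

```python
from graphlib import TopologicalSorter

# Each attribute declares its inputs (predecessors in the directed graph).
# Names are hypothetical examples, not Abnormal's actual attributes.
declared_inputs = {
    "raw_sender_name": [],
    "raw_body_text": [],
    "vip_similarity": ["raw_sender_name"],            # a model output
    "text_urgency": ["raw_body_text"],                # an NLP model output
    "bec_score": ["vip_similarity", "text_urgency"],  # a top-level model
}

# static_order() yields every attribute after all of its declared inputs,
# so extraction can simply walk this order (or parallelize independent nodes).
order = list(TopologicalSorter(declared_inputs).static_order())
```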

Attribute Versioning

Inevitably, we will want to iterate on attributes, and the worst feeling is realizing that the attribute you want to modify is used by ten downstream models. How do you make a change without retraining all of those models? How do you verify and stage the change?

This situation comes up frequently. Some common cases include:

  • An attribute is the output of a model or an embedding. We want to re-train the model, but the attribute is used by other models or heuristics.
  • An attribute relies on a database serving aggregate features, and we would like to experiment with different aggregate bucketizations.
  • We have a carefully engineered heuristic feature whose logic we would like to update.

If each attribute is versioned and downstream consumers register which version they wish to consume, then we can easily bump the version (while continuing to compute the previous versions) without affecting the consumers.
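
A minimal sketch of that versioning scheme, assuming a registry keyed by (name, version); the class, attribute name, and extractor logic are all hypothetical:

```python
# Versioned attributes: each consumer pins the version it reads, so bumping an
# attribute to v2 leaves v1 computed until consumers migrate.
class AttributeRegistry:
    def __init__(self):
        self._extractors = {}  # (name, version) -> extractor function

    def register(self, name: str, version: int, extractor):
        self._extractors[(name, version)] = extractor

    def compute(self, name: str, version: int, event) -> float:
        return self._extractors[(name, version)](event)

registry = AttributeRegistry()
registry.register("vip_similarity", 1, lambda event: 0.5)  # original logic
registry.register("vip_similarity", 2, lambda event: 0.7)  # improved logic

# An older model keeps consuming v1 while a new model trains against v2.
old_model_input = registry.compute("vip_similarity", 1, event={})
new_model_input = registry.compute("vip_similarity", 2, event={})
```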

Scaling a Machine Learning Team

In addition to enabling flexible modeling of complex problems, this graph of models lets us scale our machine learning engineering team. Previously, we had a rigid pipeline of features and models that really only allowed a single ML engineer to develop it at a time. Now, multiple ML engineers can develop models for sub-problems in parallel, then combine the resulting features and models later.

We still need to figure out how to re-extract this graph of attributes for historical data more efficiently, and to establish good processes for sunsetting older attributes. We would like to build a system that allows our security analysts, and anyone else in the company, to easily contribute attributes and have them automatically flow into downstream models and analysis. We also need to improve our ability to surface the relevant attributes and model scores behind a given decision back to the client, so they can understand why an event was flagged. And so much more… If these problems interest you, we’re hiring!
