Abnormal AI Innovation: Building Internal Tools in Seconds with AI

Learn how Abnormal leverages the latest AI developer tools to slash engineering time and streamline internal operations.
April 21, 2025

Abnormal is at an inflection point in our growth. While shipping products quickly remains critical, our rapid scaling demands internal tooling to speed up investigations for engineering and customer support.

Traditionally, developing these internal tools required multiple engineering sprints and was often deprioritized in favor of customer-facing products, leading to delays or causing the projects to be shelved entirely. This created a constant tension between immediate product needs and long-term operational efficiency.

Large language models (LLMs) have transformed this development process. What once required weeks of planning, coding, and testing can now be accomplished by crafting an effective prompt.

As part of our Abnormal AI Innovation series, we’re highlighting how our engineering team is harnessing AI to build truly AI-native products. Here, we’ll share a few fast-moving case studies that show how we’ve used cutting-edge AI developer tools to spin up internal solutions—sometimes in seconds.

Case Study 1: Notification Tools

Abnormal’s AI Security Mailbox (AISM) is a product that allows employees within a company to report suspicious emails, which are then automatically analyzed using machine learning to detect potential threats. This reduces the manual workload of investigating reported messages for security teams.

Once a phishing report is processed through AISM, Abnormal sends a notification to the reporter to inform them about whether the email was safe, spam, or malicious. Customers value this feature as a built-in security awareness tool, helping employees better recognize threats. The notifications can be configured in “AI mode,” which means we use an LLM to generate the message based on the reported email and our detection system’s analysis. Customers can tailor the tone and content by adjusting the “custom instructions” in their Abnormal Portal settings.

We often get customer questions about how best to write their custom instructions to achieve specific outcomes. To answer these questions faster, we built a tool that lets us A/B test multiple sets of custom instructions against multiple reported messages, running each combination several times. We did this by writing a detailed markdown file describing the tool's expected functionality, which allowed us to generate the entire tool from a single Cursor Agent prompt and saved multiple days of engineering time.
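Conceptually, the tool boils down to a harness like the one sketched below (the actual markdown spec, Cursor Agent prompt, and result appear in the screenshots that follow). This is an illustration rather than the generated code itself: call_llm, generate_notification, and ab_test_instructions are hypothetical names, and the LLM call is stubbed out so the sketch runs on its own.

```python
from itertools import product


def call_llm(prompt: str) -> str:
    # Placeholder for the LLM client the real tool would reuse; returns a
    # canned string so the sketch runs end to end.
    return f"<generated notification for a {len(prompt)}-character prompt>"


def generate_notification(message: str, custom_instructions: str) -> str:
    # Mirrors the shape of an AI-mode notification: the reported email and
    # verdict summary plus the customer's custom instructions.
    prompt = (
        "Draft a notification for the employee who reported this email.\n\n"
        f"Detection verdict and email summary:\n{message}\n\n"
        f"Customer custom instructions:\n{custom_instructions}\n"
    )
    return call_llm(prompt)


def ab_test_instructions(
    instruction_sets: dict[str, str],  # variant name -> custom instructions
    messages: list[str],               # sample reported messages / verdict summaries
    trials: int = 3,                   # repeat each combination to gauge variance
) -> list[dict]:
    """Generate a notification for every (variant, message, trial) combination."""
    results = []
    for (name, instructions), message, trial in product(
        instruction_sets.items(), messages, range(trials)
    ):
        results.append(
            {
                "variant": name,
                "message": message,
                "trial": trial,
                "notification": generate_notification(message, instructions),
            }
        )
    return results
```

A support engineer can then compare the outputs for each variant side by side to see how a customer's custom instructions change the tone and content of the notification.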

[Screenshots: Markdown File, Cursor Agent Prompt, Result]


Why These Prompts Worked

These tools are now widely used by our support team to resolve notification-related tickets. We attribute this success to a few key prompting strategies that let us effectively harness today's AI developer technologies (an illustrative spec skeleton follows this list):

  1. Define Expected Functionality: Each prompt includes a clear explanation of what the tool should do, either visually or in writing. This helps the model interpret possible user inputs and understand how the system should respond.

  2. Include Implementation Examples: We provide relevant code patterns that mirror existing features, ensuring consistency in structure, logic, and style.

  3. Embed Essential Context: Prompts also contain critical supporting files, such as API specifications, database connection details, and type definitions. This reduces the likelihood of the model generating infrastructure components that don’t exist.
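Put together, a spec that follows these strategies might be structured like the skeleton below. This is an illustrative outline rather than the actual file we wrote; the section names and angle-bracket placeholders are hypothetical.

```markdown
# Tool Spec: Custom-Instruction A/B Tester

## Expected Functionality
- Accept several named sets of custom instructions and several sample reported messages.
- Generate a notification for every (instruction set, message) pair, repeating each combination a configurable number of times.
- Display the outputs side by side so support engineers can compare tone and content.

## Implementation Examples
- Mirror the structure of <existing notification preview code> for request and response handling.
- Reuse the shared LLM client wrapper rather than creating a new one.

## Essential Context
- API specification: <path to internal API spec>
- Database connection helpers: <path to shared DB module>
- Type definitions: <path to notification and message types>
```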

Case Study 2: Agent Observability Tools

We also have various engineering tools that are built on a custom, secured fork of Streamlit—a framework for turning ad-hoc scripts into internal tools with UIs. These tools support a range of use cases across testing, observability, and deployments. To accelerate development, we created a dedicated Streamlit usage guide in Markdown. It outlines best practices for integrating with CloudWatch and SQL databases, configuring local development environments, and building consistent user interfaces.

In one particular case, we wanted to add an “Agent Trace Tool,” which provides a unified view into the function-calling behavior of our agents. This visibility helps us fine-tune both system performance and tool prompts. Thanks to the Streamlit guide, we were able to build the entire tool using a single Cursor Agent prompt.
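As a rough illustration of what such a page can look like in Streamlit, here is a minimal sketch. It is not the actual tool: the trace loader, file path, and column names are hypothetical, and the real version reads traces from internal storage through the helpers described in the usage guide.

```python
import json

import pandas as pd
import streamlit as st


@st.cache_data
def load_traces(path: str) -> pd.DataFrame:
    # Hypothetical loader; the real tool pulls traces from internal storage
    # (e.g., CloudWatch or SQL) via the helpers in our Streamlit guide.
    with open(path) as f:
        records = json.load(f)
    # Expected columns: agent, session_id, tool, arguments, latency_ms
    return pd.DataFrame(records)


st.title("Agent Trace Tool (sketch)")

traces = load_traces("agent_traces.json")

# Narrow down to a single agent and session before rendering the trace.
agent = st.selectbox("Agent", sorted(traces["agent"].unique()))
session = st.selectbox(
    "Session", sorted(traces.loc[traces["agent"] == agent, "session_id"].unique())
)
selected = traces[(traces["agent"] == agent) & (traces["session_id"] == session)]

# Summary table of the function calls made during this session.
st.dataframe(selected[["tool", "latency_ms"]])

# Expandable view of the arguments passed to each tool call.
for _, row in selected.iterrows():
    with st.expander(f"{row['tool']} ({row['latency_ms']} ms)"):
        st.json(row["arguments"])
```

The st.cache_data decorator keeps repeated reruns of the page from reloading the same trace data, which matters because Streamlit re-executes the script on every interaction.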

[Screenshots: Markdown, Cursor Agent Prompt, Result]

Why This Prompt Worked

We’ve launched several internal and customer-facing agents, and as adoption grows, it’s increasingly important to understand how these agents perform in production. The Agent Trace Tool gives us the visibility we need, and a few key decisions helped us build it quickly:

  1. Clear Prompting Practices: Our Streamlit usage guide includes best practices and detailed examples that reduce the risk of hallucinated operations—like incorrect database connections or missing S3 bucket references.

  2. Centralized, Well-Structured Code: We organized the agent trace logic into a single, clearly structured file. This makes the implementation easier for tools like Cursor to parse and understand in context.

  3. Python-Native Tooling: Streamlit’s clean syntax and Python-first design make it especially well-suited for large language models, which are already highly proficient in reading and generating Python code.


Together, these decisions enabled us to develop the tool quickly and ensure it remains maintainable as our agents evolve.

Building Smarter, Shipping Faster

These examples highlight just a few of the internal tools we've built to support our teams. Investing time up front is now an easy decision—our efforts pay off quickly by helping us resolve customer escalations faster and boost engineering productivity. As modern AI developer tools take on more of the implementation details, the real challenge is no longer writing code—it's having the domain knowledge to craft an effective prompt.

If you’re interested in building cutting-edge cybersecurity products at a company with an AI-first engineering culture, we’re hiring! Check out our careers page to learn more.

See Abnormal’s AI capabilities in action by scheduling a demo today!
