Enterprise Context Layer - Auto-Governance (3)
This follow-up takes the next step, moving past customer-facing facts to build an Enterprise Context Layer that encodes how Abnormal actually works: our product dependencies, processes, competitive context, retention mental models, onboarding flows, and the “expert reasoning” that rarely lives in a single document.
March 24, 2026
Recap
Andy started by re-grounding the original goal. Abnormal has core knowledge that should be consistent across the company: how products work, sales playbooks, internal processes, and standard “how we do things here.” But as velocity increases, keeping that knowledge centralized and usable becomes harder.
This became especially obvious while building help.normal.ai. The insight: AI can’t reliably answer questions (or generate high-quality outputs) if it’s operating on inconsistent, conflicting, or incomplete data.
The first concrete implementation of that vision was Support Docs: a GitHub repo maintained in a “Claude Code for documentation” style. In practice, that meant:
humans could initiate changes in a Slack channel (e.g., “fix broken links” or “create an identity security article”)
the AI would implement those changes in the repo
once reviewed, a simple publish command would push updates straight to the help site
That version proved the workflow end-to-end: once knowledge is updated, “time to customer” can drop from days/weeks to minutes, or even seconds.
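The Slack-to-publish loop described above can be sketched as a tiny dispatcher. Everything here (`DocsRepo`, `handle_slack_command`, the command strings) is a hypothetical stand-in for illustration, not Abnormal's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class DocsRepo:
    """Toy stand-in for the Support Docs GitHub repo."""
    articles: dict = field(default_factory=dict)   # working copy
    published: dict = field(default_factory=dict)  # what the help site serves

    def apply_change(self, slug: str, body: str) -> None:
        # The AI agent's edit lands as a change to the working copy.
        self.articles[slug] = body

    def publish(self) -> None:
        # A single publish command pushes reviewed updates to the help site.
        self.published = dict(self.articles)

def handle_slack_command(repo: DocsRepo, text: str) -> str:
    """Route a human instruction from a Slack channel to a repo action."""
    if text.startswith("create "):
        slug = text.removeprefix("create ").strip().replace(" ", "-")
        repo.apply_change(slug, f"Draft: {slug} (awaiting review)")
        return f"drafted {slug}"
    if text == "publish":
        repo.publish()
        return f"published {len(repo.articles)} articles"
    return "unknown command"

repo = DocsRepo()
handle_slack_command(repo, "create identity security article")
handle_slack_command(repo, "publish")
```

The point of the sketch is the shape of the loop, not the parsing: a human instruction becomes a repo change, and a single publish step closes the gap between “knowledge updated” and “customer sees it.”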
The New Capabilities
In this follow-up, Andy described the key limitation of the first version. Support Docs are mostly raw customer-facing facts: FAQs, product explanations, “what is X” content. But many of the hardest internal questions aren’t answered by raw facts alone. Experts answer using mental models:
how our org works
how teams interact
how processes actually flow
what’s implied, not explicitly written
what the tradeoffs are when sources disagree
So Andy pushed beyond Support Docs into a broader Enterprise Context Layer, maintained by a swarm of AI agents running in a sandbox environment. What changed technically and operationally? Agents now contribute continuously (1,000+ commits already) and coordinate via lightweight mechanisms, such as file locking, to avoid conflicts.
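The file-locking coordination can be illustrated with a minimal advisory lock. This is a generic sketch of the technique, not Andy's actual mechanism; it relies on the fact that creating a file with `O_CREAT | O_EXCL` is atomic, so two agents can never both believe they acquired the same lock:

```python
import os
import time
from contextlib import contextmanager

@contextmanager
def file_lock(path: str, timeout: float = 5.0):
    """Advisory lock: an agent may edit `path` only while it holds
    `<path>.lock`. Creation with O_CREAT | O_EXCL is atomic, so only
    one agent can create the lock file at a time."""
    lock_path = path + ".lock"
    deadline = time.monotonic() + timeout
    while True:
        try:
            fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            break
        except FileExistsError:
            # Another agent holds the lock; back off briefly and retry.
            if time.monotonic() >= deadline:
                raise TimeoutError(f"could not lock {path}")
            time.sleep(0.05)
    try:
        yield
    finally:
        os.close(fd)
        os.remove(lock_path)  # release so other agents can proceed

# An agent edits a doc only while holding its lock.
with file_lock("retention-model.md"):
    with open("retention-model.md", "a") as f:
        f.write("agent-1 edit\n")
```

With hundreds of concurrent commits, this kind of cheap mutual exclusion is often enough: agents that lose the race simply wait or move on to a different file.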

The guiding instruction is intentionally minimal: build the best tribal-knowledge layer possible from the perspective of top PMs, SEs, Legal/Privacy, and engineers. The layer produces knowledge artifacts that often don’t exist anywhere else as a unified view:
A unified data retention mental model (a diagram that likely doesn’t exist as a single “source of truth” doc today).
A deep, comprehensive Proofpoint competitive dive that surpasses what currently exists in one place.
A product dependency chain diagram generated as a byproduct of training on real internal questions, something that may not exist anywhere else in that unified format.

Centralized mental models for how we onboard customers, how POVs work, and how the POV lifecycle ties to documentation, assembled from scattered “pieces” into one coherent picture.

Three agent “roles” power the system:
builder agents that continuously add useful context (“add things as you see fit”)
maintainer agents that clean and improve existing docs
evaluation/training agents that learn from real questions
Andy scrapes questions from GTM Help/help channels, and the agent must answer using only the context layer. Andy then provides the “reference answer” the company actually gave, and the agent identifies what it was missing and updates the layer accordingly.
That third loop is the big leap: instead of hoping documentation gets better, the system has a built-in mechanism for discovering what’s missing and patching it.
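That question → reference-answer loop can be sketched as a simple "gap-patching" step. The lookup and diffing below are toy stand-ins (keyword matching instead of real retrieval and reasoning), and all names are hypothetical:

```python
def answer_from_layer(layer: dict, question: str) -> str:
    """Answer using ONLY the context layer. Keyword lookup stands in
    for the real agent's retrieval + reasoning over the repo."""
    hits = [fact for key, fact in layer.items() if key in question.lower()]
    return " ".join(hits) if hits else "unknown"

def training_step(layer: dict, question: str, reference: str) -> None:
    """The feedback loop: compare the layer's answer to the reference
    answer the company actually gave, then patch what was missing."""
    attempt = answer_from_layer(layer, question)
    missing = [s for s in reference.split(". ") if s and s not in attempt]
    for i, gap in enumerate(missing):
        # In the real system an agent would write a proper doc; here we
        # just file the missing sentence under a synthetic key.
        layer[f"gap:{question[:20]}:{i}"] = gap

layer = {"retention": "Message data is retained per tenant policy."}
question = "How does retention interact with legal hold?"
reference = ("Message data is retained per tenant policy. "
             "Legal hold overrides deletion.")
training_step(layer, question, reference)
```

The mechanism matters more than the matching: each real question becomes a probe, the reference answer becomes the target, and the delta between them tells the agents exactly where the layer is thin.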
The Impact
This work moves the Enterprise Context Layer from a “docs pipeline” into something closer to a knowledge operating system. Andy connected this directly to other workflows across the company:
generating slides
writing websites (e.g., Brandon’s pipeline)
generating ads
answering nuanced internal questions
The context layer becomes the substrate that makes those artifacts more consistent and trustworthy.
As code and product changes accelerate, the risk isn’t just “people don’t know.” It’s that everyone knows different versions of the truth. This is an attempt to keep the organization’s mental model synchronized.
The most valuable part is encoding how experts reason, not only what the official docs say. That’s what lets AI perform better on the messy, nuanced, internal questions where “conflicting sources” are the norm.
What’s Next
Andy’s immediate next step is intentionally simple: keep it running and observe what emerges as it continues to build.
Near-term directions implied by the demo and commentary:
keep pressure-testing answer quality by asking real questions in the channel
refine how the system prioritizes sources and resolves conflicts (Sarah had flagged this as a key challenge in the original presentation)
continue expanding the layer into areas where Abnormal has “partial docs” but not a unified model (product portfolio, onboarding, POV lifecycle, process maps)
use the question → reference answer loop as the ongoing training objective so the layer improves in the exact places teams feel pain
The through-line is clear: the goal isn’t just to publish better docs, but to build a continuously learning context layer that keeps up with how fast Abnormal ships.
Problem
Knowledge fragments across Jira, Slack, playbooks, and individual memory, making it harder to generate accurate downstream artifacts and to keep AI systems reliable.
Solution
Many agents continuously build and maintain a Context Layer in GitHub, designed to represent not just raw facts, but mental models experts rely on for decisions.
Why it's cool
It turns “insider knowledge” into an evolving, AI-readable system and uses a feedback-driven “loss function” to automatically identify what context is missing.
Technologies used:
- Claude