NoraPR Analyzer
As AI writes more of Abnormal’s code, context quality becomes the limiting factor. Shrivu Shankar built NoraPR Analyzer to close that gap by analyzing how AI-written code differs from human-written code and automatically generating the documentation needed to make AI better next time.
January 14, 2026
Context Is the Hidden Bottleneck
As AI becomes the default way code is written at Abnormal, a new challenge has emerged. While tools like NoraPR can generate high-quality pull requests from prompts, their effectiveness depends heavily on the context they’re given. In some parts of the codebase, AI performs exceptionally well. In others, engineers hit friction, produce manual fixes, or abandon AI altogether.
Shrivu Shankar saw that this inconsistency wasn’t a tooling problem, but a context problem.
NoraPR is built on Claude Code and deeply optimized for Abnormal’s codebase. But even with those optimizations, its performance varies. Some teams prompt once and get perfect results. Other teams struggle repeatedly, unsure whether the issue is the tool, the prompt, or the codebase itself.
At the same time, improving context is unintuitive. Engineers are asked to maintain Markdown documentation so AI can reason better, but the ROI isn’t always clear. Writing documentation feels disconnected from shipping features, and simply asking AI to “write better docs” often produces generic or low-quality output.
What’s missing is feedback. Engineers don’t have a clear signal for why AI failed or what context would have helped it succeed.
NoraPR Analyzer
NoraPR Analyzer introduces a new idea: let AI learn from the difference between how humans code and how AI would have coded the same change.
The system works quietly in the background. When a pull request is created manually or through AI-assisted workflows (rather than AI-initiated ones), NoraPR Analyzer detects it automatically. It then reconstructs what the original prompt would have been if the engineer had asked AI to implement the same change.
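The detection-and-reconstruction step might look something like the sketch below. This is a minimal illustration, not the actual implementation: the `PullRequest` fields, the `norapr-bot` author name, and the `ai-initiated` label are all hypothetical stand-ins, and in the real system the prompt reconstruction would itself be done by an LLM rather than simple string assembly.

```python
from dataclasses import dataclass, field

@dataclass
class PullRequest:
    author: str
    title: str
    body: str
    labels: list = field(default_factory=list)

# Hypothetical markers for PRs that the AI pipeline itself opened;
# those are skipped, since there is no human behavior to learn from.
AI_AUTHORS = {"norapr-bot"}
AI_LABELS = {"ai-initiated"}

def is_human_authored(pr: PullRequest) -> bool:
    """Only manually created or AI-assisted PRs are worth 'ghosting'."""
    if pr.author in AI_AUTHORS:
        return False
    return not (AI_LABELS & set(pr.labels))

def reconstruct_prompt(pr: PullRequest) -> str:
    """Assemble the raw material from which the original prompt
    would be reconstructed (the real system would hand this to an
    LLM and ask it to infer the engineer's intent)."""
    return (
        "Implement the following change in our codebase.\n\n"
        f"Goal: {pr.title}\n"
        f"Details: {pr.body}\n"
        "Touch only the files needed; follow existing conventions."
    )
```

The key design choice is the filter: ghost runs only make sense for changes where a human actually supplied expertise the AI did not have.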

Next, it runs a “ghost” Claude Code instance using that reconstructed prompt. This produces a hypothetical AI-generated version of the change. The analyzer then compares the ghost result to the actual human-written PR and identifies where AI fell short.
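The comparison step can be sketched as a per-file diff between the ghost run's output and the merged human PR. This is an assumed approach using simple text similarity; the function name, `threshold` value, and dict-of-files representation are illustrative, and the real analyzer may compare changes semantically rather than textually.

```python
import difflib

def find_gaps(human_files: dict, ghost_files: dict, threshold: float = 0.8):
    """Compare the ghost run's output to the real PR, file by file.
    Files where the AI diverged most from the human are the places
    where context was missing. Returns (path, similarity) pairs,
    most divergent first."""
    gaps = []
    for path, human_src in human_files.items():
        ghost_src = ghost_files.get(path, "")
        ratio = difflib.SequenceMatcher(None, human_src, ghost_src).ratio()
        if ratio < threshold:
            gaps.append((path, round(ratio, 2)))
    # Files the AI touched that the human did not are also divergences.
    for path in ghost_files.keys() - human_files.keys():
        gaps.append((path, 0.0))
    return sorted(gaps, key=lambda g: g[1])
```

Files the ghost run reproduced closely need no new documentation; the low-similarity files are the ones worth explaining.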
Those gaps become the most valuable insight. Instead of writing generic documentation, the system generates targeted guidance that explains exactly what AI missed and why. For example, it might surface a rule like “Always use the Cassie UID field instead of the output text field,” not because it inferred it from code alone, but because AI failed without that instruction.
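Turning those gaps into guidance could be as simple as rendering each finding into the Markdown docs AI already reads. A minimal sketch, assuming a hypothetical per-finding structure (`rule`, `ghost`, `human`) and an assumed per-directory doc file as the destination:

```python
def to_guidance(gap_findings: list) -> str:
    """Render failure-derived rules as a Markdown snippet to append to
    the context docs AI consumes (e.g. a per-directory CLAUDE.md).
    Each finding pairs what the ghost run did with what the human
    actually did, so the rule is grounded in an observed failure."""
    lines = ["## AI guidance (derived from PR analysis)", ""]
    for finding in gap_findings:
        lines.append(f"- {finding['rule']}")
        lines.append(f"  (ghost run did: {finding['ghost']}; "
                     f"human PR did: {finding['human']})")
    return "\n".join(lines)
```

Because every rule traces back to a concrete ghost-vs-human divergence, the output avoids the generic filler that "write better docs" prompts tend to produce.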
The key distinction is that documentation is derived from failure analysis, not from guesswork.
AI Improving Without Human Effort
NoraPR Analyzer doesn’t require engineers to change how they work. There’s no new process to follow and no additional documentation burden. Engineers write code as they normally would, and the system learns from it automatically.
This creates a powerful feedback loop. AI observes human expertise in real production changes, identifies what it couldn’t infer on its own, and converts that insight into durable context for future prompts. Over time, this makes AI-initiated code more reliable, more consistent, and easier to trust across the entire codebase.
It also changes how documentation is perceived. Instead of being a chore, documentation becomes a byproduct of real engineering work, generated only where it meaningfully improves AI performance.
Higher-Quality AI Code at Scale
NoraPR Analyzer addresses several long-standing pain points at once. It reduces friction for teams that struggle with AI tooling, clarifies where AI is strong versus weak, and steadily improves code generation quality without slowing down development.
Perhaps most importantly, it helps Abnormal scale AI-initiated coding responsibly. As AI-generated PRs continue to outpace human-written ones, the analyzer ensures that the system keeps learning from expert behavior rather than drifting toward generic solutions.
What Makes the NoraPR Analyzer So Awesome
The ideas behind NoraPR Analyzer extend far beyond pull requests. Shrivu is already exploring how similar “ghost AI” techniques could be applied to other workflows, such as comparing pre-meeting briefs with actual customer calls, analyzing on-call incident responses, or even re-architecting services in parallel to human work.
The broader vision is AI that works alongside humans in the background, replicating tasks, identifying gaps, and continuously improving organizational context across R&D, GTM, and beyond.
NoraPR Analyzer is an early but important step toward that future. It shows what’s possible when AI isn’t just asked to do work, but is asked to learn from how work actually gets done.
Problem
AI-generated code quality varies across the codebase due to missing or inconsistent context, and engineers lack a scalable way to improve that context.
Solution
A NoraPR plugin that reverse-engineers human PRs into prompts, compares them to AI-generated results, and uses the differences to produce high-signal documentation.
Why it's cool
Instead of asking humans to write better docs, the system learns from real work and improves AI performance automatically, without adding engineering overhead.
Technologies used:
- Nora
- Claude