Claude Code Feedback Loop
AI agents already generate a large share of Abnormal’s code, but when they hit errors in CI environments, the problems often go unnoticed. Shrivu Shankar built the Claude Code Feedback Loop, a system that pipes agent logs into an LLM, extracts failure patterns, and feeds fixes back into development.
November 19, 2025
NOTE: Demo visuals use either blurred real data or synthetic placeholders to protect customer privacy.
The Hidden Problem of AI-Created Code
AI is already writing a significant portion of Abnormal’s code. But what happens when those AI agents hit errors? Shrivu tackled that question with the Claude Code Feedback Loop, a system designed to make AI not just fast, but self-improving.
Claude Code and other AI tools now initiate a large share of pull requests across Abnormal.
While the velocity is impressive, there’s a hidden issue:
- Agents sometimes can’t run code fully, especially in CI environments.
- Errors occur due to missing binaries, permission issues, or syntax mismatches.
- Unlike humans, agents don’t “complain” in Slack or escalate problems, so inefficiencies stay invisible.
- Without visibility, both humans and AI lose time when code can’t be properly tested.
In short, AI was generating plenty of code, but not always executing it efficiently.
Building the Claude Code Feedback Loop
Shrivu built a feedback loop to close this gap. The system takes logs from every Claude Code run in CI and pipes them into an LLM for analysis.
Here’s what it does:
- Processes thousands of logs to identify recurring problems.
- Surfaces inefficiencies, such as agents misinterpreting commands or assuming a local environment when running in CI.
- Provides actionable feedback, for example highlighting where AppDev tools weren’t working properly in CI.
- Automates fixes by sending the summarized issues back into Claude or Cursor to generate and apply solutions.
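The first two steps can be sketched in a few lines. This is a hypothetical illustration, not the project's actual code: the failure categories, regex patterns, and function names below are all assumptions, and the real system hands the summary to an LLM rather than just printing it.

```python
import re
from collections import Counter

# Hypothetical failure categories; a real pipeline would learn or
# curate these from thousands of CI logs.
ERROR_PATTERNS = {
    "missing_binary": re.compile(r"command not found|No such file"),
    "permission": re.compile(r"[Pp]ermission denied"),
    "syntax": re.compile(r"SyntaxError|ParseError"),
}

def classify_failures(log_lines):
    """Count how often each known failure category appears in the logs."""
    counts = Counter()
    for line in log_lines:
        for label, pattern in ERROR_PATTERNS.items():
            if pattern.search(line):
                counts[label] += 1
    return counts

def build_summary_prompt(counts, top_n=3):
    """Turn the recurring failures into a prompt an LLM could act on."""
    bullets = "\n".join(
        f"- {label}: {n} occurrences" for label, n in counts.most_common(top_n)
    )
    return (
        "These failure patterns recurred across Claude Code CI runs.\n"
        "Propose environment or tooling fixes for each:\n" + bullets
    )

# Toy log lines standing in for real CI output.
logs = [
    "bash: bazel: command not found",
    "PermissionError: Permission denied: '/var/run/docker.sock'",
    "bash: bazel: command not found",
]
print(build_summary_prompt(classify_failures(logs)))
```

The key design point the article describes is the last mile: that summary prompt goes back into Claude or Cursor, so fixes are generated from aggregate evidence across runs rather than from any single failure.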

Think of it as a “manager for the swarm of agents,” constantly monitoring what’s breaking and feeding improvements back into the system.
Smarter, Faster, and More Visible Engineering
The Claude Code Feedback Loop has already demonstrated key benefits:
- Increased velocity: AI agents spend less time failing and more time producing working code.
- Systematic improvements: Fixes apply across all runs, so every agent benefits from the lessons of the last.
- Visibility: For the first time, teams can see exactly where and why AI agents struggle.
- Human relief: Issues that would normally require manual debugging get resolved automatically.
As Shrivu puts it, the system makes AI not just a faster coder, but a smarter one, improving itself with every iteration.
The vision extends beyond code generation:
- Applying the same loop to TDD generation, PR reviews, and even on-call alerts.
- Using meta-analysis of failures to fix systemic issues before humans ever see them.
- Creating a self-sustaining ecosystem where every run strengthens the next.
What Makes Claude Code Feedback Loop Awesome
What’s exciting about Shrivu’s project is its combination of technical ingenuity and ambition. By asking how AI can improve itself, he reframed efficiency as a continuous feedback cycle rather than a one-time optimization.
It’s another example of Abnormal’s innovation culture: employees spotting hidden bottlenecks, building creative AI-powered solutions, and driving systemic improvements that compound over time.
Problem
Claude Code agents sometimes fail to test or execute changes correctly, wasting time and limiting velocity.
Solution
A feedback loop that ingests Claude Code logs, summarizes failures, and uses AI to apply fixes across environments.
Why it's cool
Turns every agent run into a learning opportunity, accelerates AI code generation, and continuously improves system efficiency.
Technologies used:
- Claude
- Cursor