Code to Content - Agent Trio Pipeline (4)
Brandon Qin’s latest Code to Content iteration adds a third agent for visuals and a feedback system that learns from edits, enabling faster, higher-quality AI-generated websites.
March 25, 2026
NOTE: Demo visuals include blurred data or synthetic placeholders to protect customer privacy.
Where the System Broke Down
The previous version proved something important: AI could generate usable product pages from code. Content flowed. Layouts rendered. Pages were structurally sound.
But they did not feel finished.
What was missing showed up immediately when scrolling:
Pages lacked visual depth, such as videos, motion, and imagery
Brand consistency across assets was uneven
Feedback improved outputs, but the system did not learn from it

Three-agent pipeline output: writer, builder, and visual agents combine to generate a complete, styled homepage from code inputs.
The result was a gap between “functional” and “production-ready.” Closing that gap required more than better prompts. It required expanding the system itself.
Adding the Third Agent
The latest iteration introduces a third role: the image-and-video agent. If the writer agent defines what to say and the builder agent defines how it’s structured, this new agent defines how it looks and feels.
Together, the system now operates as a coordinated pipeline:
Writer agent ingests GitHub changes and generates structured product content
Content is stored in the CMS as clean, reusable input
Builder agent turns that content into full pages using a component system
Image and video agent generates visual assets that match the brand direction
This separation matters. Instead of having a single system try to do everything, each agent focuses on a specific output. That makes iteration faster and failures easier to isolate.
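The separation of roles can be sketched as a small orchestration function. This is a minimal illustration only: the agent names, return shapes, and `run_pipeline` helper are assumptions for the sketch, not the actual implementation (which would call LLMs and a real CMS rather than return stubs).

```python
from dataclasses import dataclass

@dataclass
class PageDraft:
    content: dict   # structured copy from the writer agent (stored in the CMS)
    layout: list    # ordered component specs from the builder agent
    assets: dict    # generated media keyed by component id

def writer_agent(diff: str) -> dict:
    # In the real system this would summarize GitHub changes with an LLM;
    # here we return stubbed structured content.
    return {"headline": f"What changed: {diff}", "sections": ["overview", "faq"]}

def builder_agent(content: dict) -> list:
    # Map structured content onto a component system.
    return [{"id": i, "type": s} for i, s in enumerate(["hero"] + content["sections"])]

def visual_agent(layout: list) -> dict:
    # Produce one brand-aligned asset per component slot.
    return {c["id"]: f"{c['type']}-loop.mp4" for c in layout}

def run_pipeline(diff: str) -> PageDraft:
    content = writer_agent(diff)     # 1. ingest code changes
    layout = builder_agent(content)  # 2. compose page structure
    assets = visual_agent(layout)    # 3. fill slots with media
    return PageDraft(content, layout, assets)

draft = run_pipeline("add SSO support")
print(len(draft.layout))  # three component slots: hero, overview, faq
```

Because each stage only consumes the previous stage's output, a failure in, say, asset generation can be retried without re-running the writer, which is the isolation benefit described above.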
The visual agent is flexible by design. In the demo, it produces a cyber-style aesthetic with looping background videos and glass-like visuals. But the same system could shift styles entirely depending on inputs, from minimal enterprise to something more playful.
What the System Produces
Brandon’s demo walks through pages generated end-to-end using this three-agent pipeline. A single command triggers the workflow, and within minutes, a complete page is produced.

Builder and visual agents collaborate to structure threat content and dynamically place generated assets within page components.
Each page includes:
Structured product narrative and positioning
Layout components like hero sections, stats, and FAQs
Generated visual assets such as looping videos and animations
Interactive elements pulled from a shared component library
The builder agent decides where components belong. The visual agent fills those components with assets. The result is not a static template but a composed page.
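The builder-decides, visual-fills handoff might look something like the following. The component and asset names are invented for illustration; the placeholder fallback mirrors the post's note that some elements are still placeholders.

```python
def compose(layout: list[dict], assets: dict[int, str]) -> list[str]:
    """Combine builder output (layout) with visual output (assets).

    Slots without a generated asset fall back to a placeholder, so a
    page can render end-to-end even before every asset is ready.
    """
    rendered = []
    for slot in layout:
        asset = assets.get(slot["id"], "placeholder.png")
        rendered.append(f"<{slot['type']} src='{asset}'>")
    return rendered

layout = [{"id": 0, "type": "hero"}, {"id": 1, "type": "stats"}]
assets = {0: "hero-loop.mp4"}  # visual agent has filled only the hero so far
print(compose(layout, assets))
# → ["<hero src='hero-loop.mp4'>", "<stats src='placeholder.png'>"]
```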
Some elements are still placeholders. Brand alignment is still being refined with design input. But the system is already producing outputs that are directionally close to production.
Capturing Feedback the Right Way
The second major addition is a feedback system that enables scalable iteration.
Previously, feedback lived in Google Docs or ad hoc comments. That worked for humans but not for systems. Edits were applied, but the reasoning behind them was lost.
This version introduces a lightweight feedback app that sits alongside the site.
It allows teams to:
Leave visual feedback directly on rendered pages
Add high-level and inline content comments
Suggest layout changes tied to page structure
Each piece of feedback is structured and stored. A backend system processes these inputs to determine what changed and why. That context is then fed back into the agents.
This turns feedback into a learning signal, not just a one-time correction.
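A structured feedback record might be shaped like this. The post does not specify the actual schema, so every field name here is a hypothetical stand-in; the point is that each note carries its target, type, and reasoning rather than living as free text in a doc.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Feedback:
    page: str     # which rendered page the note applies to
    target: str   # component id or text anchor on that page
    kind: str     # "visual" | "content" | "layout"
    note: str     # the reviewer's reasoning, preserved for the agents
    created: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def to_learning_signal(items: list[Feedback]) -> dict[str, list[str]]:
    """Group notes by kind so each agent receives only the feedback
    relevant to its role on the next regeneration pass."""
    signal: dict[str, list[str]] = {}
    for f in items:
        signal.setdefault(f.kind, []).append(f"{f.target}: {f.note}")
    return signal

batch = [
    Feedback("/product", "hero", "visual", "video loop is too fast"),
    Feedback("/product", "faq-2", "content", "tighten the answer"),
]
print(to_learning_signal(batch)["visual"])  # → ["hero: video loop is too fast"]
```

Routing by `kind` is what makes the feedback reusable: the visual agent never has to parse content notes, and vice versa.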
Building a Learning Loop
With this system in place, the workflow changes:
Generate a page
Review it collaboratively
Capture feedback in structured form
Feed that back into the agents
Regenerate with improvements
Over time, the system compounds quality. It does not rely on manual rework for every iteration. It learns patterns, preferences, and standards.
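The five steps above reduce to a loop in which guidance accumulates across iterations. This is a toy sketch with stubbed stand-ins for the real agents and feedback app (`generate_page` and `collect_feedback` are invented here); the structural point is that each regeneration sees every prior round of review, not just the latest edits.

```python
def generate_page(guidance: list[str]) -> str:
    # Guidance accumulates, so quality compounds across versions.
    return f"page v{len(guidance) + 1} ({len(guidance)} notes applied)"

def collect_feedback(page: str) -> list[str]:
    # In practice this comes from the feedback app; stubbed so the
    # loop converges after two rounds of notes.
    return [] if "2 notes" in page else [f"note on {page}"]

guidance: list[str] = []
for _ in range(3):
    page = generate_page(guidance)
    notes = collect_feedback(page)
    if not notes:
        break                    # reviewers approved — stop iterating
    guidance.extend(notes)       # feed structured edits back into the agents
print(page)  # → page v3 (2 notes applied)
```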

Component-driven generation: the builder agent maps products into structured cards, while the visual agent provides consistent, brand-aligned assets.
This is especially important as velocity increases. Without a feedback loop, faster generation creates more noise. With it, faster generation accelerates improvement.
What Changes for Teams
The impact is not just speed, though the speedup is meaningful: pages that previously took weeks can now be generated in minutes.
The bigger shift is how teams interact with the system:
PMMs focus on refining messaging instead of drafting from scratch
Design influences systems and components rather than individual pages
Engineers ship features that are reflected externally without extra coordination
Marketing gains a continuously improving web presence
There is also early potential for new workflows, including generating custom pages or entire sites tailored to specific customers using internal context.
From Output to Compounding Systems
This iteration marks a shift in the Code to Content series. The system is no longer just generating outputs. It is starting to improve itself.
Three pieces now work together:
Specialized agents for content, structure, and visuals
A shared pipeline that connects them
A feedback loop that drives learning
That combination changes the trajectory. The goal is not just faster websites. It is systems that stay current, improve with use, and reflect what the company builds without constant manual effort.
Not a one-time generation. A system that compounds.
Problem
Even with AI-generated pages, sites lacked visuals, polish, and a scalable way to learn from feedback.
Solution
Episode 4 adds an image/video agent and a structured feedback loop that automatically improves content, design, and layout.
Why it's Cool
Three agents now generate full pages, while feedback turns into system learning, compounding quality without slowing speed.
Technologies used:
- GitHub
- Streamlit