netrecon.

What you see vs. what the browser does: adding real-render diffs

2026-04-23 · Yossi Ben Hagai · 5 min read · sre · performance · browser · netrecon

For six weeks netrecon’s change-detection ran on raw fetches only: follow redirects, pull headers, parse HTML, probe a set of well-known paths. Fast, cheap, subrequest-budget-friendly. It catches a lot — CSP changes, cookie-attribute regressions, new third-party hosts in <script> tags.

And it missed entire classes of regression that matter.

The things you can only see by actually rendering

Three examples I hit in the last week:

  1. A site shipped a bundle that threw an unhandled ReferenceError on load. Raw fetch: clean 200, unchanged headers, unchanged HTML. Real render: console error count went from 0 to 3. If you only watch the response, the site looks healthy. If you watch the browser, it’s visibly broken.

  2. A new analytics tag loaded a third-party host I’d never seen in the HTML. It got injected client-side by Google Tag Manager after page load. The raw HTML had <script src="https://www.googletagmanager.com/.."> which I was tracking. The actual fan-out — 7 more hostnames fetched at runtime — only showed up in the browser’s network panel.

  3. LCP regressed from 1.4s to 3.1s. No header change, no payload change, same number of scripts. A CSS media query was swapping in a very large image below the fold. The only way to catch this was to render and observe the LCP entry.

You cannot approximate any of these with curl. They are genuinely new signal.
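The runtime fan-out in example 2 reduces to set arithmetic over the hostnames the browser actually fetched; a minimal sketch, with function and variable names that are mine rather than netrecon's:

```typescript
// Given the URLs the browser's network panel observed at runtime,
// reduce them to the set of third-party hostnames: everything that
// is neither the page's own host nor already present in the HTML.
function thirdPartyHosts(
  requestUrls: string[],
  pageHost: string,
  htmlHosts: Set<string>,
): string[] {
  const hosts = new Set<string>();
  for (const u of requestUrls) {
    const host = new URL(u).hostname;
    if (host !== pageHost && !htmlHosts.has(host)) hosts.add(host);
  }
  return [...hosts].sort(); // sorted so diffs are stable
}

// Example: GTM was in the HTML, but its runtime fan-out was not.
const observed = [
  "https://example.com/app.js",
  "https://www.googletagmanager.com/gtm.js?id=GTM-XXXX",
  "https://www.google-analytics.com/g/collect",
  "https://cdn.segment.io/analytics.js",
];
const fanOut = thirdPartyHosts(
  observed,
  "example.com",
  new Set(["www.googletagmanager.com"]),
);
// fanOut → ["cdn.segment.io", "www.google-analytics.com"]
```

In the real tool this list would come from the browser's request log rather than a hardcoded array, but the shape of the signal is the same.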

The cost problem

The reason I didn’t ship this on day one: headless Chromium is the most expensive thing in the Workers ecosystem. On the Workers free tier, Browser Rendering gives you 10 browser-minutes per day and 3 concurrent sessions. A real render averages 4–6 seconds of wall time, so the daily cap is about 100–150 renders.

For a tool that captures snapshots on a cron every 6 hours across multiple targets, that math doesn’t close. At four snapshots per target per day, a few dozen targets would eat the entire cap, with nothing left over for retries or on-demand renders.
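The budget arithmetic, spelled out with the numbers above:

```typescript
// Back-of-envelope for the free-tier budget quoted above.
const browserSecondsPerDay = 10 * 60; // 10 browser-minutes per day
const secondsPerRender = 5;           // midpoint of the 4–6s average
const rendersPerDay = browserSecondsPerDay / secondsPerRender; // 120

// A cron every 6 hours means 4 snapshots per target per day.
const snapshotsPerTargetPerDay = 24 / 6;
const maxTargets = Math.floor(rendersPerDay / snapshotsPerTargetPerDay);
// maxTargets works out to 30, and that assumes every render succeeds
// on the first try and nobody ever clicks the manual button.
```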

The fix: make it opt-in and cool down aggressively

The browser render is strictly opt-in: it never runs on the cron, it fires only when you click the dedicated button on a target, and each target sits behind a cooldown so repeated clicks can’t drain the daily budget.

The effect: the expensive signal is available where it’s worth the cost (one click, diagnostic) and absent where it isn’t (automated drift tracking).
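The cooldown itself can be one timestamp comparison in front of the Browser Rendering call; a sketch where the one-hour window and the storage choice are my assumptions, not netrecon's actual values:

```typescript
// Decide whether a manual "render now" click is allowed, given the
// last time this target was rendered. Anything outside the window
// falls back to the cheap raw-fetch snapshot.
const COOLDOWN_MS = 60 * 60 * 1000; // assumed: 1 hour per target

function canRender(lastRenderAt: number | null, now: number): boolean {
  return lastRenderAt === null || now - lastRenderAt >= COOLDOWN_MS;
}

// In a Worker this check would sit in front of the Browser Rendering
// session, with lastRenderAt read from KV or D1 keyed by target host.
```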

Shaping the output for diffs

Real-render output is inherently flaky. Two renders of the same site a minute apart will produce different console-error orderings, sometimes different third-party host timings, sometimes one fewer request because an ad didn’t load.

To keep the diff useful rather than noisy I normalise hard: console errors and hostnames are deduped and sorted, third-party requests are collapsed to hostnames rather than full URLs, and timings are rounded so render-to-render jitter doesn’t register as change.

The result is a payload where “this render looked like the previous one” is byte-identical JSON, not approximately-equal.
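That canonicalisation can be sketched as a single normalising pass; the exact rules here (sort, dedupe, round to the nearest 10ms) are illustrative rather than netrecon's actual ones:

```typescript
interface RenderSnapshot {
  consoleErrors: string[];
  thirdPartyHosts: string[];
  largestContentfulPaint: number; // ms
}

// Canonicalise a raw render so two equivalent renders serialise to
// byte-identical JSON: sort and dedupe lists, round timings.
function normalise(raw: RenderSnapshot): string {
  const snap = {
    consoleErrors: [...new Set(raw.consoleErrors)].sort(),
    thirdPartyHosts: [...new Set(raw.thirdPartyHosts)].sort(),
    largestContentfulPaint: Math.round(raw.largestContentfulPaint / 10) * 10,
  };
  return JSON.stringify(snap); // key order is fixed by construction
}

// Two flaky renders of the same page collapse to the same bytes:
const a = normalise({
  consoleErrors: ["err B", "err A", "err A"],
  thirdPartyHosts: ["cdn.segment.io", "api.segment.io"],
  largestContentfulPaint: 1418,
});
const b = normalise({
  consoleErrors: ["err A", "err B"],
  thirdPartyHosts: ["api.segment.io", "cdn.segment.io"],
  largestContentfulPaint: 1422,
});
// a === b
```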

What the diff actually looks like

From a real render of a demo target I broke on purpose:

{
  "browser": {
    "consoleErrors": {
      "count": { "before": 0, "after": 3 },
      "samples": {
        "added": [
          "Uncaught ReferenceError: segment is not defined",
          "Failed to load resource: net::ERR_BLOCKED_BY_CLIENT",
          "[Violation] Forced reflow while executing JavaScript"
        ]
      }
    },
    "network": {
      "thirdPartyHosts": {
        "added": ["cdn.segment.io", "api.segment.io"]
      }
    },
    "timing": {
      "largestContentfulPaint": { "before": 1420, "after": 3180 }
    }
  }
}

Three things jump out immediately without reading code: someone wired up Segment, the analytics call is being blocked by (some) clients, and the LCP nearly tripled. That is the value: each field is already a sentence a human can act on.
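That shape is produced by a plain set difference over two normalised snapshots; a sketch with hypothetical type and field names mirroring the JSON above:

```typescript
interface BrowserSignals {
  consoleErrors: string[];   // already sorted and deduped
  thirdPartyHosts: string[]; // already sorted and deduped
  largestContentfulPaint: number; // ms
}

// Compute the delta the narrator sees: counts before/after plus only
// the entries that were added, never the full snapshots themselves.
function diffBrowser(before: BrowserSignals, after: BrowserSignals) {
  const added = (b: string[], a: string[]) => a.filter((x) => !b.includes(x));
  return {
    consoleErrors: {
      count: {
        before: before.consoleErrors.length,
        after: after.consoleErrors.length,
      },
      samples: { added: added(before.consoleErrors, after.consoleErrors) },
    },
    network: {
      thirdPartyHosts: {
        added: added(before.thirdPartyHosts, after.thirdPartyHosts),
      },
    },
    timing: {
      largestContentfulPaint: {
        before: before.largestContentfulPaint,
        after: after.largestContentfulPaint,
      },
    },
  };
}
```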

Why the AI narrator gets along with this

The diff narrator I shipped last week only sees the delta — not the before/after snapshots. Adding the browser signals didn’t change the narrator at all; it just gave the LLM more citation paths to work with. A typical narration on the diff above now looks like:

Segment analytics was added to the site. Three console errors appeared — the segment is not defined error suggests the script loaded but its global isn’t available at the point it’s being called. Also: cdn.segment.io and api.segment.io are new third-party hosts, and LCP regressed from 1.4s to 3.2s. The LCP regression may be directly caused by the analytics load; worth checking the render-blocking attribute.

Citations: browser.consoleErrors.samples, browser.network.thirdPartyHosts, browser.timing.largestContentfulPaint. All real paths in the diff object. No hallucinations, because the citation whitelist only accepts paths the diff actually contains.
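The whitelist check is a dotted-path lookup against the diff object; a minimal sketch (the function name is mine):

```typescript
// Accept a citation only if its dotted path resolves to a real value
// inside the diff object; anything else is treated as a hallucination.
function isValidCitation(diff: unknown, path: string): boolean {
  let node: any = diff;
  for (const key of path.split(".")) {
    if (node === null || typeof node !== "object" || !(key in node)) {
      return false;
    }
    node = node[key];
  }
  return true;
}

const diff = {
  browser: {
    consoleErrors: { samples: { added: ["Uncaught ReferenceError: ..."] } },
  },
};
// isValidCitation(diff, "browser.consoleErrors.samples") → true
// isValidCitation(diff, "browser.timing.cumulativeLayoutShift") → false
```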

TL;DR

Raw fetches miss whole classes of regression that only a real render surfaces: runtime console errors, client-side third-party fan-out, LCP. Headless Chromium is too expensive to run on every cron tick, so the render is opt-in with a per-target cooldown, and the output is normalised so that an unchanged render diffs to byte-identical JSON.

Try it on a target you own at /watch — there’s a “take snapshot + browser render” button next to the normal one.