SPA vs. Hypermedia: Real-World Performance Under Load

2026-03-07 · zweiundeins.gmbh

Technical consulting and implementation for engineering teams: architecture reviews, performance analysis, and maintainable web applications without unnecessary SPA complexity.

| Metric | SPA | Hypermedia | Ratio |
| --- | --- | --- | --- |
| HTTP Requests | 36+ | 8 | 4.5× fewer |
| Transferred (compressed) | ~1.1 MB | 41.9 KB | 26× smaller |
| Resources (uncompressed) | ~4.5 MB | 173 KB | 26× smaller |
| JavaScript (transferred) | ~1.1 MB | 13.5 KB | 80× smaller |

The 26× reduction in transferred bytes directly translates to faster loads on slow networks. On a 1.6 Mbps connection, downloading 1.1 MB takes approximately 5.5 seconds before the browser can even begin parsing. 41.9 KB takes about 0.2 seconds.
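Those transfer times follow from simple arithmetic: payload bits divided by link bandwidth. A back-of-the-envelope sketch (it ignores latency, TCP slow start, and protocol overhead, so real-world numbers will be somewhat worse for both architectures):

```javascript
// Back-of-the-envelope transfer time: payload bits / link bandwidth.
// Ignores latency, TCP slow start, and protocol overhead.
function downloadSeconds(bytes, linkMbps) {
  return (bytes * 8) / (linkMbps * 1e6);
}

const spaSeconds = downloadSeconds(1.1e6, 1.6);    // 5.5 s for ~1.1 MB
const hyperSeconds = downloadSeconds(41.9e3, 1.6); // ~0.21 s for 41.9 KB
```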

Performance profiling during actual page loads and chat interactions reveals how each architecture uses CPU resources. The flamegraphs below show Chrome DevTools Performance traces under identical conditions: Slow 4G network throttling and 4× CPU slowdown.

Scripting — JavaScript execution
Rendering — Style/layout calculation
Painting — Pixels to screen
System — Browser internals
Loading — Network activity
Idle — Available for user input
Memory — JS heap (bottom graph)

Page Load Comparison

Page load flamegraphs under Slow 4G + 4× CPU throttling. Notice the dense, tall scripting blocks (yellow) in the SPA vs. the shallow, sparse execution in Hypermedia. The SPA's memory graph (bottom) shows JS heap growing +13.5 MB during initial load—Hypermedia grows only +1.5 MB (9× less).

| Metric | SPA | Hypermedia | Ratio |
| --- | --- | --- | --- |
| Largest Contentful Paint | 8.1 s | 1.1 s | 7.4× faster |
| Total Time | 11.56 s | 2.45 s | 4.7× faster |
| Scripting Time | 5,613 ms | 257 ms | 21.8× less |
| Main Thread Time | 4,145 ms | 651 ms | 6.4× less |
| JS Heap Growth | +13.5 MB | +1.5 MB | 9× smaller |
| Transfer Size | ~1,107 KB | 41.9 KB | 26× smaller |
| DOM Nodes (final) | 326 | 963 | 3× more |
| Event Listeners | 349 | 59 | 5.9× fewer |

The key visual difference in the flamegraphs: the SPA shows tall, dense stacks of JavaScript execution (yellow blocks) that dominate the trace. Hypermedia’s flamegraph is shallow and sparse—most of the time is spent idle or in native browser rendering, not executing JavaScript. Note: Scripting Time and Main Thread Time are from different DevTools views—the Summary panel tallies all scripting across threads (including workers), while Main Thread Time is the aggregate task duration from the Activity tab. The two overlap but don’t cover identical scope, which is why Scripting can appear higher than Main Thread Time.

The higher DOM node count in Hypermedia (963 vs 326) reflects server-rendered content that’s immediately visible. The SPA starts with a minimal DOM and builds it client-side through hydration—which explains why it has 5.9× more event listeners attached despite fewer nodes.

Chat Response Comparison (Streaming)

Chat response flamegraphs. The SPA spends over 4× more time in JavaScript execution (10,500ms vs 2,485ms). Hypermedia distributes work more evenly—spending proportionally more time in rendering and painting as HTML patches hit the real DOM on every SSE chunk.

| Metric | SPA | Hypermedia | Notes |
| --- | --- | --- | --- |
| LCP | 206 ms | 35 ms | 6× faster |
| Scripting | 10,500 ms | 2,485 ms | 4.2× less JS work |
| Rendering | 1,274 ms | 2,225 ms | Hypermedia patches the real DOM on every chunk |
| Painting | 538 ms | 1,240 ms | More paint = more DOM content |
| Total | 19,686 ms | 12,500 ms | 37% faster overall |
| JS Heap Growth | +10.4 MB | +1.9 MB | 5.5× less memory |
| V8 Node Allocs ¹ | 394 → 822 | 3,164 → 34,559 | Hypermedia: high GC churn; SPA: modest growth |

During streaming, the SPA spends over 4× more time executing JavaScript than the hypermedia version (10,500ms vs 2,485ms). Hypermedia compensates with more rendering and painting time—because Datastar patches the real DOM on every incoming SSE chunk rather than buffering updates in a virtual DOM. The browser’s native HTML parser does the heavy lifting, which shows up as rendering cost rather than scripting cost. Overall Hypermedia completes the full chat response cycle 37% faster (12,500ms vs 19,686ms).

¹ V8 Node Allocs counts all nodes in the V8 heap—attached and detached (pending GC). Datastar's patches replace DOM subtrees on each SSE event, creating orphaned nodes that accumulate until garbage-collected; the live DOM after streaming completes is ~400–600 nodes. React reconciles mostly in place, so far fewer detached nodes accumulate (+428 vs +31,395). This metric reflects GC pressure, not live DOM size.
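To make the per-chunk model concrete, here is a minimal sketch of splitting an SSE stream into HTML fragments; in the hypermedia setup each fragment is handed to the browser's native HTML parser. This handles data-only frames for illustration—Datastar's actual wire format and client API are more involved and not shown here.

```javascript
// Minimal data-only SSE frame parser (illustrative; real EventSource
// handling and Datastar's own event format are richer than this).
// Frames are separated by a blank line; each "data:" line carries payload.
function parseSSE(buffer) {
  return buffer
    .split("\n\n")
    .filter(frame => frame.trim().length > 0)
    .map(frame => frame
      .split("\n")
      .filter(line => line.startsWith("data:"))
      .map(line => line.slice("data:".length).trimStart())
      .join("\n"));
}

// In a browser, each parsed fragment would then be patched into the live
// DOM, e.g. container.insertAdjacentHTML("beforeend", fragment).
```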

The 5.5× difference in memory growth (10.4 MB vs 1.9 MB per chat interaction) compounds over time. On mobile devices, high memory pressure triggers garbage collection pauses—visible as UI jank that users perceive as poor performance. Each spike in the memory graph represents potential frame drops. React’s reconciliation overhead is the primary driver: it maintains a virtual DOM tree in memory on top of the real DOM, and that overhead grows with each streamed token.


Read the original article

Comments

  • By scuff3d 2026-03-07 7:03

    Was expecting to see HTMX, cool to see Datastar in the wild.

    Seems like people are finally rediscovering how much they can do with a lot less. Hope the push for simplicity continues

  • By ricardobeat 2026-03-07 12:55 · 1 reply

    The point about Brotli compression being extremely efficient for SSE is a great insight.

    It means applications that send “dumb” HTML snapshot updates instead of “optimized” payloads can actually be more efficient while massively simplifying the architecture.

    • By notnullorvoid 2026-03-07 18:38

      An optimized fine-grained payload with compression can outperform the coarse approach. Coarse payloads have added cost to swap into the DOM, and fine-grained payloads don't require complex architecture.

      Aside from rendering, I have additional concerns about coarse SSE payloads. You basically remove any cache capabilities (though this is common with all update-streaming approaches). Uncompressed, the payloads are quite large and the browser may not be able to dispose of that memory; for example, Response objects need to hold all body data in memory for the lifetime of the Response, because they have various methods that return the whole body as combined views of the buffer. Also, the benefits of compression for SSE payloads are drastically reduced in situations where connections easily get dropped.

  • By littlecranky67 2026-03-07 12:00 · 3 replies

    I feel like the SPA vs. SSR debate misses the point: SPAs are most often web applications (as opposed to informational websites). I created SPAs as a contractor for 10+ years, and it has always been B2B web apps for large corporations. The users are always professionals who work with the app on a mostly daily basis.

    Since .js, .css and assets are immutable between releases (and modern tooling like NextJS appends hashes to the filenames so they can be served with 'Cache-Control: immutable'), the app is always served from browser cache until there is a new release - which is usually weeks, not days. And if the browser cache should be empty, you are comparing a one-time wait of 500 ms–1 s against an app you will use for hours that day. If, however, every link click, every route change, every interaction triggers an SSR server roundtrip, the app will not feel snappy during usage.

    Now, if people choose the wrong tool for the job and use a 1 MB SPA to serve a landing page, that is where things go wrong. But for me, metrics that include the download time of the .js/.css assets are pointless, as they occur once - relative to the total time of app usage. After the initial load, the snappiness of your SPA will mostly depend on your database queries and API performance, which is also the case in an SSR solution. YMMV of course.

    • By ricardobeat 2026-03-07 12:58

      > if people chose the wrong tool for the job and use a 1MB SPA to serve a landing page, that is where things go wrong

      That is exactly the case. Can’t really blame the people when every learning resource, react evangelist, tweet and post points you towards that.

      > If however, every link click, every route change, every interaction triggers a SSR server roundtrip, the app will not feel snappy during usage.

      SPAs still do the same, with possibly more round trips for API requests; we all know how endemic loading spinners have become. Rendering HTML does not meaningfully affect server response times. And frameworks like Datastar (used in this benchmark), htmx, and Alpine allow you to avoid full page loads.

    • By vrighter 2026-03-08 5:18

      I have almost never encountered an SPA where anything happened without a very perceptible delay. My Windows 95 PC on a Pentium literally felt snappier. On 100 MHz and 8 MiB of RAM...

    • By yawaramin 2026-03-07 20:10

      Yeah but the problem is that people don't just use a single webapp all the time. We all browse and go to many different websites, which all have payloads that they want us to download and run. So in practice we end up re-downloading bundles constantly, many of which contain the exact same libraries; but because they're bundled and minified, they're not cacheable, so we have to fetch them over and over again.

      Don't believe me? Check this out: https://tonsky.me/blog/js-bloat/

HackerNews