Progressive JSON

overreacted.io

Why streaming isn't enough.

May 31, 2025

Do you know about Progressive JPEGs? Here’s a nice explanation of what a Progressive JPEG is. The idea is that instead of loading the image top to bottom, the image instead is fuzzy at first and then progressively becomes more crisp.

What if we apply the same idea to transferring JSON?

Suppose you have a JSON tree with some data:

{
  header: 'Welcome to my blog',
  post: {
    content: 'This is my article',
    comments: [
      'First comment',
      'Second comment',
      // ...
    ]
  },
  footer: 'Hope you like it'
}

Now imagine you want to transfer it over the wire. Because the format is JSON, you’re not going to have a valid object tree until the last byte loads. You have to wait for the entire thing to load, then call JSON.parse, and then process it.

The client can’t do anything with JSON until the server sends the last byte. If a part of the JSON was slow to generate on the server (e.g. loading comments took a slow database trip), the client can’t start any work until the server finishes all the work.
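To see the problem concretely, here is a tiny sketch (the payload is made up): JSON.parse is all-or-nothing, so a response cut off mid-stream yields nothing usable.

```javascript
// JSON.parse is all-or-nothing: a payload cut off mid-stream is useless.
const partial = '{"header": "Welcome to my blog", "post": {"content"';

let tree = null;
try {
  tree = JSON.parse(partial); // throws SyntaxError on incomplete input
} catch (err) {
  tree = null; // nothing usable until the last byte arrives
}
console.log(tree); // null
```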

Would you call that good engineering? And yet it’s the status quo—that’s how 99.9999%* of apps send and process JSON. Do we dare to improve on that?

* I made it up

Streaming JSON

We can try to improve this by implementing a streaming JSON parser. A streaming JSON parser would be able to produce an object tree from an incomplete input:

{
  header: 'Welcome to my blog',
  post: {
    content: 'This is my article',
    comments: [
      'First comment',
      'Second comment'

If you ask for the result at this point, a streaming parser would hand you this:

{
  header: 'Welcome to my blog',
  post: {
    content: 'This is my article',
    comments: [
      'First comment',
      'Second comment'
      // (The rest of the comments are missing)
    ]
  }
  // (The footer property is missing)
}

However, this isn’t too great either.

One downside of this approach is that the objects are kind of malformed. For example, the top-level object was supposed to have three properties (header, post, and footer), but the footer is missing because it hasn’t appeared in the stream yet. The post was supposed to have three comments, but you can’t actually tell whether more comments are coming or if this was the last one.

In a way, this is inherent to streaming—didn’t we want to get incomplete data?—but this makes it very difficult to actually use this data on the client. None of the types “match up” due to missing fields. We don’t know what’s complete and what’s not. That’s why streaming JSON isn’t popular aside from niche use cases. It’s just too hard to actually take advantage of it in the application logic which generally assumes the types are correct, “ready” means “complete”, and so on.
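To illustrate, here is one naive way to fake such a parser: scan the buffer for whatever is still open, patch up the tail, and hand the result to JSON.parse. This is just a sketch of the idea; real streaming parsers are incremental rather than re-parsing the whole buffer, and parseIncomplete is a made-up helper name.

```javascript
// Naive sketch: make an incomplete JSON buffer parseable by closing
// whatever is still open. Real streaming parsers work incrementally;
// this just illustrates the shape of the result they hand you.
function parseIncomplete(buffer) {
  const closers = [];
  let inString = false;
  let escaped = false;
  for (const ch of buffer) {
    if (inString) {
      if (escaped) escaped = false;
      else if (ch === '\\') escaped = true;
      else if (ch === '"') inString = false;
    } else if (ch === '"') inString = true;
    else if (ch === '{') closers.push('}');
    else if (ch === '[') closers.push(']');
    else if (ch === '}' || ch === ']') closers.pop();
  }
  let patched = buffer;
  if (inString) patched += '"'; // close an unterminated string
  patched = patched.trimEnd();
  if (patched.endsWith(',')) patched = patched.slice(0, -1); // drop a dangling comma
  if (patched.endsWith(':')) patched += ' null'; // a key with no value yet
  while (closers.length) patched += closers.pop(); // close open objects/arrays
  return JSON.parse(patched);
}

// The truncated stream from above parses to a tree with missing parts:
parseIncomplete('{"header": "Welcome to my blog", "post": {"comments": ["First comment"');
// → { header: 'Welcome to my blog', post: { comments: ['First comment'] } }
```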

In the analogy with JPEG, this naïve approach to streaming matches the default “top-down” loading mechanism. The picture you see is crisp but you only see the top 10%. So despite the high fidelity, you don’t actually see what’s on the picture.

Curiously, this is also how streaming HTML itself works by default. If you load an HTML page on a slow connection, it will be streamed in the document order:

<html>
  <body>
    <header>Welcome to my blog</header>
    <article>
      <p>This is my article</p>
      <ul class="comments">
        <li>First comment</li>
        <li>Second comment</li>

This has some upsides—the browser is able to display the page partially—but it has the same issues. The cutoff point is arbitrary and can be visually jarring or even mess up the page layout. It’s unclear if more content is coming. Whatever’s below—like the footer—is cut off, even if it was ready on the server and could have been sent earlier. When we stream data in order, one slow part delays everything.

Let’s repeat that: when we stream things in order they appear, a single slow part delays everything that comes after it. Can you think of some way to fix this?

Progressive JSON

There is another way to approach streaming.

So far we’ve been sending things depth-first. We start with the top-level object’s properties, then descend into its post property, then into the post’s comments property, and so on. If something is slow, everything else gets held up.

However, we could also send data breadth-first.

Suppose we send the top-level object like this:

{
  header: "$1",
  post: "$2",
  footer: "$3"
}

Here, "$1", "$2", "$3" refer to pieces of information that have not been sent yet. These are placeholders that can progressively be filled in later in the stream.

For example, suppose the server sends a few more rows of data to the stream:

{
  header: "$1",
  post: "$2",
  footer: "$3"
}
/* $1 */
"Welcome to my blog"
/* $3 */
"Hope you like it"

Notice that we’re not obligated to send the rows in any particular order. In the above example, we’ve just sent both $1 and $3—but the $2 row is still pending!

If the client tried to reconstruct the tree at this point, it could look like this:

{
  header: "Welcome to my blog",
  post: new Promise(/* ... not yet resolved ... */),
  footer: "Hope you like it"
}

We’ll represent the parts that haven’t loaded yet as Promises.
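A minimal client for this hypothetical format could be sketched like so (createProgressiveClient and the row format are my own invention for illustration): every unresolved "$n" placeholder becomes a Promise, and a later row resolves it.

```javascript
// Sketch of a client for this hypothetical format. "$n" placeholder
// strings become Promises that resolve when the matching row arrives.
function createProgressiveClient() {
  const pending = new Map(); // id -> { promise, resolve }
  function slot(id) {
    if (!pending.has(id)) {
      let resolve;
      const promise = new Promise(r => { resolve = r; });
      pending.set(id, { promise, resolve });
    }
    return pending.get(id);
  }
  // Replace "$n" placeholders with Promises, recursively.
  function hydrate(value) {
    if (typeof value === 'string' && /^\$\d+$/.test(value)) {
      return slot(value.slice(1)).promise;
    }
    if (Array.isArray(value)) return value.map(hydrate);
    if (value && typeof value === 'object') {
      return Object.fromEntries(
        Object.entries(value).map(([key, v]) => [key, hydrate(v)])
      );
    }
    return value;
  }
  return {
    receiveRoot: (chunk) => hydrate(chunk),
    receiveRow: (id, chunk) => slot(String(id)).resolve(hydrate(chunk)),
  };
}

// Usage with the stream above:
const client = createProgressiveClient();
const tree = client.receiveRoot({ header: "$1", post: "$2", footer: "$3" });
// tree.header, tree.post, and tree.footer are unresolved Promises here.
client.receiveRow(1, "Welcome to my blog");
// tree.header has now resolved to "Welcome to my blog".
```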

Then suppose the server could stream in a few more rows:

{
  header: "$1",
  post: "$2",
  footer: "$3"
}
/* $1 */
"Welcome to my blog"
/* $3 */
"Hope you like it"
/* $2 */
{
  content: "$4",
  comments: "$5"
}
/* $4 */
"This is my article"

This would “fill in” some of the missing pieces from the client’s perspective:

{
  header: "Welcome to my blog",
  post: {
    content: "This is my article",
    comments: new Promise(/* ... not yet resolved ... */),
  },
  footer: "Hope you like it"
}

The Promise for the post would now resolve to an object. However, we still don’t know what’s inside the comments, so now those are represented as a Promise.

Finally, the comments could stream in:

{
  header: "$1",
  post: "$2",
  footer: "$3"
}
/* $1 */
"Welcome to my blog"
/* $3 */
"Hope you like it"
/* $2 */
{
  content: "$4",
  comments: "$5"
}
/* $4 */
"This is my article"
/* $5 */
["$6", "$7", "$8"]
/* $6 */
"This is the first comment"
/* $7 */
"This is the second comment"
/* $8 */
"This is the third comment"

Now, from the client’s perspective, the entire tree would be complete:

{
  header: "Welcome to my blog",
  post: {
    content: "This is my article",
    comments: [
      "This is the first comment",
      "This is the second comment",
      "This is the third comment"
    ]
  },
  footer: "Hope you like it"
}

By sending data breadth-first in chunks, we gained the ability to progressively handle it on the client. As long as the client can deal with some parts being “not ready” (represented as Promises) and process the rest, this is an improvement!

Inlining

Now that we have the basic mechanism, we’ll adjust it for more efficient output. Let’s have another look at the entire streaming sequence from the last example:

{
  header: "$1",
  post: "$2",
  footer: "$3"
}
/* $1 */
"Welcome to my blog"
/* $3 */
"Hope you like it"
/* $2 */
{
  content: "$4",
  comments: "$5"
}
/* $4 */
"This is my article"
/* $5 */
["$6", "$7", "$8"]
/* $6 */
"This is the first comment"
/* $7 */
"This is the second comment"
/* $8 */
"This is the third comment"

We may have gone a little too far with streaming here. Unless generating some parts actually is slow, we don’t gain anything from sending them as separate rows.

Suppose that we have two different slow operations: loading a post and loading a post’s comments. In that case, it would make sense to send three chunks in total.

First, we would send the outer shell:

{
  header: "Welcome to my blog",
  post: "$1",
  footer: "Hope you like it"
}

On the client, this would immediately become:

{
  header: "Welcome to my blog",
  post: new Promise(/* ... not yet resolved ... */),
  footer: "Hope you like it"
}

Then we’d send the post data (but without the comments):

{
  header: "Welcome to my blog",
  post: "$1",
  footer: "Hope you like it"
}
/* $1 */
{
  content: "This is my article",
  comments: "$2"
}

From the client’s perspective:

{
  header: "Welcome to my blog",
  post: {
    content: "This is my article",
    comments: new Promise(/* ... not yet resolved ... */),
  },
  footer: "Hope you like it"
}

Finally, we’d send the comments in a single chunk:

{
  header: "Welcome to my blog",
  post: "$1",
  footer: "Hope you like it"
}
/* $1 */
{
  content: "This is my article",
  comments: "$2"
}
/* $2 */
[
  "This is the first comment",
  "This is the second comment",
  "This is the third comment"
]

That would give us the whole tree on the client:

{
  header: "Welcome to my blog",
  post: {
    content: "This is my article",
    comments: [
      "This is the first comment",
      "This is the second comment",
      "This is the third comment"
    ]
  },
  footer: "Hope you like it"
}

This is more compact and achieves the same purpose.

In general, this format gives us leeway to decide when to send things as a single chunk vs. multiple chunks. As long as the client is resilient to chunks arriving out of order, the server can pick different batching and chunking heuristics.
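To make that concrete, here is a rough sketch of such a server, assuming the slow parts are modeled as Promises (serialize is a hypothetical name, not any particular library's API): synchronous values are inlined, while each Promise is outlined as a "$n" placeholder whose row is emitted in whatever order it resolves.

```javascript
// Sketch of a serializer with this batching strategy: inline anything
// that's already available, outline each Promise as "$n", and emit rows
// as soon as (and in whatever order) the Promises resolve.
async function* serialize(root) {
  let nextId = 0;
  const inFlight = new Set(); // rows we still owe the client
  function outline(value) {
    if (value && typeof value.then === 'function') {
      const id = ++nextId;
      const row = value.then(v => {
        inFlight.delete(row);
        return [id, outline(v)]; // resolved values may outline more rows
      });
      inFlight.add(row);
      return '$' + id;
    }
    if (Array.isArray(value)) return value.map(outline);
    if (value && typeof value === 'object') {
      return Object.fromEntries(
        Object.entries(value).map(([k, v]) => [k, outline(v)])
      );
    }
    return value;
  }
  yield JSON.stringify(outline(root)); // the shell goes out immediately
  while (inFlight.size) {
    const [id, chunk] = await Promise.race(inFlight);
    yield `/* $${id} */ ${JSON.stringify(chunk)}`;
  }
}
```

With a fast header/footer, a slow post, and slower comments, this emits exactly three chunks in the order shell → post → comments, matching the sequence above.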

Outlining

One interesting consequence of this approach is that it also gives us a natural way to reduce repetition in the output stream. If we’re serializing an object we’ve already seen before, we can just outline it as a separate row, and reuse it.

For example, suppose we have an object tree like this:

const userInfo = { name: 'Dan' };

[
  { type: 'header', user: userInfo },
  { type: 'sidebar', user: userInfo },
  { type: 'footer', user: userInfo }
]

If we were to serialize it to plain JSON, we’d end up repeating { name: 'Dan' }:

[
  { type: 'header', user: { name: 'Dan' } },
  { type: 'sidebar', user: { name: 'Dan' } },
  { type: 'footer', user: { name: 'Dan' } }
]

However, if we’re serving JSON progressively, we could choose to outline it:

[
  { type: 'header', user: "$1" },
  { type: 'sidebar', user: "$1" },
  { type: 'footer', user: "$1" }
]
/* $1 */
{ name: "Dan" }

We could also pursue a more balanced strategy—for example, to inline objects by default (for compactness) until we see some object being used two or more times, at which point we’ll emit it separately and dedupe the rest of them in the stream.

This also means that, unlike with plain JSON, we can support serializing cyclic objects. A cyclic object just has a property that points to its own stream “row”.
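Here is a sketch of that dedupe strategy (serializeWithDedupe is a hypothetical helper name): a first pass counts how many times each object is referenced, and a second pass outlines only the objects referenced more than once. Because a cyclic object references itself, the same check also catches cycles.

```javascript
// Two-pass dedupe sketch: count references first, then outline only
// the objects that are referenced more than once.
function serializeWithDedupe(root) {
  const counts = new Map(); // object -> number of references
  (function count(value) {
    if (value && typeof value === 'object') {
      counts.set(value, (counts.get(value) || 0) + 1);
      if (counts.get(value) === 1) {
        Object.values(value).forEach(count); // recurse on first sighting only
      }
    }
  })(root);

  let nextId = 0;
  const ids = new Map(); // shared object -> row id
  const rows = [];
  function emit(value) {
    if (value && typeof value === 'object' && counts.get(value) > 1) {
      if (!ids.has(value)) {
        ids.set(value, ++nextId); // register before recursing: handles cycles
        rows.push([ids.get(value), shape(value)]);
      }
      return '$' + ids.get(value);
    }
    return shape(value);
  }
  function shape(value) {
    if (Array.isArray(value)) return value.map(emit);
    if (value && typeof value === 'object') {
      return Object.fromEntries(
        Object.entries(value).map(([k, v]) => [k, emit(v)])
      );
    }
    return value;
  }
  return { root: emit(root), rows };
}

// The userInfo example above becomes:
const userInfo = { name: 'Dan' };
const { root, rows } = serializeWithDedupe([
  { type: 'header', user: userInfo },
  { type: 'sidebar', user: userInfo },
  { type: 'footer', user: userInfo },
]);
// root: [{ type: 'header', user: '$1' }, ...], rows: [[1, { name: 'Dan' }]]
```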

Streaming Data vs Streaming UI

The approach described above is essentially how React Server Components work.

Suppose you write a page with React Server Components:

function Page() {
  return (
    <html>
      <body>
        <header>Welcome to my blog</header>
        <Post />
        <footer>Hope you like it</footer>
      </body>
    </html>
  );
}

async function Post() {
  const post = await loadPost();
  return (
    <article>
      <p>{post.text}</p>
      <Comments />
    </article>
  );
}

async function Comments() {
  const comments = await loadComments();
  return <ul>{comments.map(c => <li key={c.id}>{c.text}</li>)}</ul>;
}

React will serve the output of the Page as a progressive JSON stream. On the client, it will be reconstructed as a progressively loaded React tree.

Initially, the React tree on the client will appear like this:

<html>
  <body>
    <header>Welcome to my blog</header>
    {new Promise(/* ... not resolved yet */)}
    <footer>Hope you like it</footer>
  </body>
</html>

Then, as loadPost resolves on the server, more will stream in:

<html>
  <body>
    <header>Welcome to my blog</header>
    <article>
      <p>This is my post</p>
      {new Promise(/* ... not resolved yet */)}
    </article>
    <footer>Hope you like it</footer>
  </body>
</html>

Finally, when loadComments resolves on the server, the client receives the rest:

<html>
  <body>
    <header>Welcome to my blog</header>
    <article>
      <p>This is my post</p>
      <ul>
        <li key="1">This is the first comment</li>
        <li key="2">This is the second comment</li>
        <li key="3">This is the third comment</li>
      </ul>
    </article>
    <footer>Hope you like it</footer>
  </body>
</html>

However, here’s the kicker.

You don’t actually want the page to jump arbitrarily as the data streams in. For example, maybe you never want to show the page without the post’s content.

This is why React doesn’t display “holes” for pending Promises. Instead, it displays the closest declarative loading state, indicated by <Suspense>.

In the above example, there are no <Suspense> boundaries in the tree. This means that, although React will receive the data as a stream, it will not actually display a “jumping” page to the user. It will wait for the entire page to be ready.

However, you can opt into a progressively revealed loading state by wrapping a part of the UI tree into <Suspense>. This doesn’t change how the data is sent (it’s still as “streaming” as possible), but it changes when React reveals it to the user.

For example:

import { Suspense } from 'react';

function Page() {
  return (
    <html>
      <body>
        <header>Welcome to my blog</header>
        <Post />
        <footer>Hope you like it</footer>
      </body>
    </html>
  );
}

async function Post() {
  const post = await loadPost();
  return (
    <article>
      <p>{post.text}</p>
      <Suspense fallback={<CommentsGlimmer />}>
        <Comments />
      </Suspense>
    </article>
  );
}

async function Comments() {
  const comments = await loadComments();
  return <ul>{comments.map(c => <li key={c.id}>{c.text}</li>)}</ul>;
}

Now the user will perceive the loading sequence in two stages:

  • First, the post “pops in” together with the header, the footer, and a glimmer for comments. The header and the footer never appear on their own.
  • Then, the comments “pop in” on their own.

In other words, the stages in which the UI gets revealed are decoupled from how the data arrives. The data is streamed as it becomes available, but we only want to reveal things to the user according to intentionally designed loading states.

In a way, you can see those Promises in the React tree acting almost like a throw, while <Suspense> acts almost like a catch. The data arrives as fast as it can in whatever order the server is ready to send it, but React takes care to present the loading sequence gracefully and let the developer control the visual reveal.

Note that what I described so far has nothing to do with “SSR” or HTML. I was describing a general mechanism for streaming a UI tree represented as JSON. You can turn that JSON tree into progressively revealed HTML (and React can do that), but the idea is broader than HTML and applies to SPA-like navigations as well.

In Conclusion

In this post, I’ve sketched out one of the core innovations of RSC. Instead of sending data as a single big chunk, it sends the props for your component tree outside-in. As a result, as soon as there’s an intentional loading state to display, React can do that while the rest of the data for your page is being streamed in.

I’d like to challenge more tools to adopt progressive streaming of data. If you have a situation where you can’t start doing something on the client until the server stops doing something, that’s a clear example of where streaming can help. If a single slow thing can slow down everything after it, that’s another warning sign.

Like I showed in this post, streaming alone is not enough—you also need a programming model that can take advantage of streaming and gracefully handle incomplete information. React solves that with intentional <Suspense> loading states. If you know systems that solve this differently, I’d love to hear about them!


Comments

  • By goranmoomin 2025-06-01 2:20 · 5 replies

    Seems like some people here are taking this post literally, as in the author (Dan Abramov) is proposing a format called Progressive JSON — it is not.

    This is more of a post on explaining the idea of React Server Components where they represent component trees as javascript objects, and then stream them on the wire with a format similar to the blog post (with similar features, though AFAIK it’s bundler/framework specific).

    This allows React to have holes (that represent loading states) on the tree to display fallback states on first load, and then only display the loaded component tree afterwards when the server actually can provide the data (which means you can display the fallback spinner and the skeleton much faster, with more fine grained loading).

    (This comment is probably wrong in various ways if you get pedantic, but I think I got the main idea right.)

    • By danabramov 2025-06-01 2:27 · 2 replies

      Yup! To be fair, I also don't mind if people take the described ideas and do something else with them. I wanted to describe RSC's take on data serialization without it seeming too React-specific because the ideas are actually more general. I'd love if more ideas I saw in RSC made it to other technologies.

      • By tough 2025-06-01 4:20 · 1 reply

        hi dan! really interesting post.

        do you think a new data serialization format built around easier generation/parseability and that also happened to be streamable because its line based like jsonld could be useful for some?

        • By danabramov 2025-06-01 8:47 · 2 replies

          I don’t know! I think it depends on whether you’re running into any of these problems and have levers to fix them. RSC was specifically designed for that so I was trying to explain its design choices. If you’re building a serializer then I think it’s worth thinking about the format’s characteristics.

          • By tough 2025-06-01 18:15

            Awesome, thanks! I do keep running on the issues, but the levers as you say make it harder to implement.

            As of right now, I could only replace the JSON tool calling on LLMs on something I fully control like vLLM, and the big labs are probably happy to over-charge 20-30% tokens for each tool call, so they wouldn't really be interested in replacing JSON any time soon.

            Also, it feels like battling against a giant which is already a standard. Maybe there's a place for it in really specialized workflows where those savings make the difference (not only money: you also gain a 20-30% extra token window if you don't waste it on quotes and braces and whatnot).

            Thanks for replying!

          • By dgb23 2025-06-01 15:51

            I've used React in the past to build some applications and components. Not familiar with RSC.

            What immediately comes to mind is using a uniform recursive tree instead, where each node has the same fields. In a funny way that would mimic the DOM if you squint. Each node would encode its type, id, name, value, parent_id and order, for example. The engine in front can now generically put stuff into the right place.

            I don't know whether that is feasible here. Just a thought. I've used similar structures in data driven react (and other) applications.

            It's also efficient to encode in memory, because you can put this into a flat, compact array. And it fits nicely into SQL dbs as well.

      • By hn_throwaway_99 2025-06-02 12:22

        GraphQL has similar notions, e.g. @defer and @stream.

    • By krzat 2025-06-01 7:10 · 4 replies

      Am I the only person that dislikes progressive loading? Especially if it involves content jumping around.

      And the most annoying antipattern is showing empty state UI during loading phase.

      • By danabramov 2025-06-01 8:21

        Right — that’s why the emphasis is on intentionally designed loading states in this section: https://overreacted.io/progressive-json/#streaming-data-vs-s...

        Quoting the article:

        > You don’t actually want the page to jump arbitrarily as the data streams in. For example, maybe you never want to show the page without the post’s content. This is why React doesn’t display “holes” for pending Promises. Instead, it displays the closest declarative loading state, indicated by <Suspense>.

        > In the above example, there are no <Suspense> boundaries in the tree. This means that, although React will receive the data as a stream, it will not actually display a “jumping” page to the user. It will wait for the entire page to be ready. However, you can opt into a progressively revealed loading state by wrapping a part of the UI tree into <Suspense>. This doesn’t change how the data is sent (it’s still as “streaming” as possible), but it changes when React reveals it to the user.

        […]

        > In other words, the stages in which the UI gets revealed are decoupled from how the data arrives. The data is streamed as it becomes available, but we only want to reveal things to the user according to intentionally designed loading states.

      • By dominicrose 2025-06-02 8:42 · 1 reply

        Smalltalk UIs used to work with only one CPU thread. Any action from the user would freeze the whole UI while it was working, but the positive aspect of that is that it was very predictable and bug free. That's helpful since Smalltalk is OOP.

        Since React is functional programming it works well with parallelization so there is room for experiments.

        > Especially if it involves content jumping around.

        I remember this from the early days of Android: you'd search for something, and in the time it took you to tap, the list of results changed and you clicked on something else. Happens with ads on some websites too, maybe intentionally?

        > And the most annoying antipattern is showing empty state UI during loading phase.

        Some low-quality software even shows "There are no results for your search" when the search didn't even start or complete.

        • By igouy 2025-06-02 18:12

          > Smalltalk UIs used to work with only one CPU thread. Any action from the user would freeze the whole UI while it was working …

          If that happened, maybe a programmer messed up the green threads!

          "The Smalltalk-80 system provides support for multiple independent processes with three classes named Process, ProcessorScheduler, and Semaphore. "

          p251, "Smalltalk-80: The Language and its Implementation"

          https://rmod-files.lille.inria.fr/FreeBooks/BlueBook/Blueboo...

      • By sdeframond 2025-06-01 11:59

        You might be interested in the "remote data" pattern (for lack of a better name)

        https://www.haskellpreneur.com/articles/slaying-a-ui-antipat...

      • By Szpadel 2025-06-01 7:57 · 3 replies

        The alternative is to stare at a blank page without any indication that something is happening.

        • By withinboredom 2025-06-01 9:44

          It’s better than moving the link or button as I’m clicking it.

        • By leptons 2025-06-01 8:05

          I'm sure that isn't the only alternative.

        • By ahofmann 2025-06-01 8:07 · 1 reply

          Or, you could use caches and other optimizations to serve content fast.

          • By withinboredom 2025-06-09 7:42

            lol. A cache means they already have it. That doesn’t help people who don’t have the asset yet.

    • By hinkley 2025-06-01 20:28

      Ember did something like this but it made writing Ajax endpoints a giant pain in the ass.

      It’s been so long since I used Ember that I’ve forgotten the terms, but essentially it rearranged the tree structure so that some of the children were at the end of the file. I believe it was meant to handle DAGs more efficiently, but I may have hallucinated that recollection.

      But if you’re using a SAX style streaming parser you can start making progress on painting and perhaps follow-up questions while the initial data is still loading.

      Of course in a single threaded VM, you can snatch Defeat from the jaws of Victory if you bollocks up the order of operations through direct mistakes or code evolution over time.

    • By vinnymac 2025-06-01 3:11 · 2 replies

      I already use streaming partial json responses (progressive json) with AI tool calls in production.

      It’s become a thing, even beyond RSCs, and has many practical uses if you stare at the client and server long enough.

      • By motorest 2025-06-01 6:51 · 1 reply

        Can you offer some detail into why you find this approach useful?

        From an outsider's perspective, if you're sending around JSON documents so big that it takes so long to parse them to the point reordering the content has any measurable impact on performance, this sounds an awful lot like you are batching too much data when you should be progressively fetching child resources in separate requests, or even implementing some sort of pagination.

        • By Wazako 2025-06-01 12:32

          Slow LLM generation. Progressive display of progressive JSON is mandatory.

      • By tough 2025-06-01 4:21 · 2 replies

        how do you do that exactly?

        • By danenania 2025-06-01 18:11 · 1 reply

          One way is to eagerly call JSON.parse as fragments are coming in. If you also split on json semantic boundaries like quotes/closing braces/closing brackets, you can detect valid objects and start processing them while the stream continues.

        • By richin13 2025-06-01 12:42 · 1 reply

          Not the original commenter but I’ve done this too with Pydantic AI (actually the library does it for you). See “Streaming Structured Output” here https://ai.pydantic.dev/output/#streaming-structured-output

          • By tough 2025-06-01 18:13

            Thanks yes! Im aware of structured outputs, llama.cpp has also great support with GBNF and several languages beyond json.

            I've been trying to create go/rust ones but its way harder than just json due to all the context/state they carry over

  • By jatins 2025-06-01 5:46 · 9 replies

    I have seen Dan's "2 computers" talk and read some of his recent posts trying to explore RSC and their benefits.

    Dan is one of the best explainers in the React ecosystem, but IMO if one has to work this hard to sell/explain a tech, there are two possibilities: 1/ there is no real need for the tech, 2/ it's a flawed abstraction.

    #2 seems somewhat true because most frontend devs I know still don't "get" RSC.

    Vercel has been aggressively pushing this on users and most of the adoption of RSC is due to Nextjs emerging as the default React framework. Even among Nextjs users most devs don't really seem to understand the boundaries of server components and are cargo culting

    That coupled with the fact that React wouldn't even merge the PR that mentions Vite as a way to create React apps makes me wonder if the whole push for RSC is really meant for users/devs or just a way for vendors to push their hosting platforms. If you could just ship an SPA from S3 fronted with a CDN, clearly that's not great for the Vercels and Netlifys of the world.

    In hindsight Vercel just hiring a lot of OG React team members was a way to control the future of React and not just a talent play

    • By danabramov 2025-06-01 8:28 · 1 reply

      You’re wrong about the historical aspects and motivations but I don’t have the energy to argue about it now and will save it for another post. (Vercel isn’t setting React’s direction; rather, they’re the ones who funded person-decades of work under the direction set by the React team.)

      I’ll just correct the allegation about Vite — it’s being worked on, but the ball is largely in the Vite team’s court because it can’t work well without bundling in DEV (and the Vite team knows it and will be fixing that). The latest work in progress is here: https://github.com/facebook/react/pull/33152.

      Re: people not “getting” it — you’re kind of making a circular argument. To refute it I would have to shut up. But I like writing and I want to write about the topics I find interesting! I think even if you dislike RSC, there’s enough interesting stuff there to be picked into other technologies. That’s really all I want at this point. I don’t care to convince you about anything but I want people to also think about these problems and to steal the parts of the solution that they like. Seems like the crowd here doesn’t mind that.

      • By andrewingram 2025-06-01 12:02

        I also appreciate that you’re doing these explainers so that people don’t have to go the long way round to understand what problems exist that call for certain shapes of solutions — especially when those solutions can feel contrived or complicated.

        As someone who’s been building web UI for nearly 30 years (scary…), I’ve generally been fortunate enough that when some framework I use introduces a new feature or pattern, I know what they’re trying to do. But the only reason I know what they’re trying to do is because I’ve spent some amount of time running into the problems they’re solving. The first time I saw GraphQL back in 2015, I “got” it; 10 years later most people using GraphQL don’t really get it because they’ve had it forced upon them or chose it because it was the new shiny thing. Same was true of Suspense, server functions, etc.

    • By liamness 2025-06-01 11:17 · 1 reply

      You can of course still just export a static site and host it on a basic CDN, as you say. And you can self host Next.js in the default "dynamic" mode, you just need to be able to run an Express server, which hardly locks you into any particular vendor.

      Where it gets a little more controversial is if you want to run Next.js in full fat mode, with serverless functions for render paths that can operate on a stale-while-revalidate basis. Currently it is very hard for anyone other than Vercel to properly implement that (see the opennextjs project for examples), due to undocumented "magic". But thankfully Next.js / Vercel have proposed to implement (and dogfood) adapters that allow this functionality to be implemented on different platforms with a consistent API:

      https://github.com/vercel/next.js/discussions/77740

      I don't think the push for RSC is at all motivated by the shady reasons you're suggesting. I think it is more about the realisation that there were many good things about the way we used to build websites before SPA frameworks began to dominate. Mostly rendering things on the server, with a little progressive enhancement on the client, is a pattern with a lot of benefits. But even with SSR, you still end up pushing a lot of logic to the client that doesn't necessarily belong there.

      • By lioeters 2025-06-01 13:20

        > thankfully Next.js / Vercel have proposed to implement (and dogfood) adapters that allow this functionality to be implemented on different platforms with a consistent API:

        Seeing efforts like this (started by the main dev of Next.js working at Vercel) convinces me that the Vercel team is honestly trying to be a good steward with their influence on the React ecosystem, and in general being a beneficial community player. Of course as a VC-funded company its purpose is self-serving, but I think they're playing it pretty respectably.

        That said, there's no way I'm going to run Next.js as part of a server in production. It's way too fat and complicated. I'll stick with using it as a static site generator, until I replace it with something simpler like Vite and friends.

    • By throwingrocks 2025-06-01 9:05

      > IMO if one has to work this hard to sell/explain a tech there's 2 possibilities 1/ there is no real need of tech 2/ it's a flawed abstraction

      There’s of course a third option: the solution justifies the complexity. Some problems are hard to solve, and the solutions require new intuition.

      It’s easy to say that, but it’s also easy to say it should be easier to understand.

      I’m waiting to see how this plays out.

    • By metalrain 2025-06-01 8:45 · 4 replies

      While RSC as technology is interesting, I don't think it makes much sense in practice.

      I don't want to have a fleet of Node/Bun backend servers that have to render complex components. I'd rather have static pages and/or React SPA with Go API server.

      You get similar result with much smaller resources.

      • By pas 2025-06-01 12:03

        It's convenient for integrating with backends. You can use async/await on the server, no need for hooks (callbacks) for data loading.

        It allows for dynamism (user only sees the menus that they have permissions for), you can already show those parts that are already loaded while other parts are still loading.

        (And while I prefer the elegance and clean separation of concerns that come with a good REST API, it's definitely more work to maintain both the frontend and the backend for it. Especially in cases where the backend-for-frontend integrates with more backends.)

        So it's the new PHP (with ob_flush), good for dashboards and big complex high-traffic webshop-like sites, where you want to spare no effort to be able to present the best options to the dear customer as soon as possible. (And also it should be crawlable, and it should work on even the lowest powered devices.)
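The data-loading contrast this comment draws can be sketched in plain Node (no real RSC API here; `getVisibleMenu` and the menu data are hypothetical stand-ins for a permission-aware backend call):

```javascript
// On the server you can simply await data while rendering, instead of a
// client component juggling useEffect/useState loading states and callbacks.
// getVisibleMenu is a hypothetical permission-aware data source.
async function getVisibleMenu(user) {
  const all = [
    { label: "Home", requires: null },
    { label: "Admin", requires: "admin" },
  ];
  // Only return the menu items this user is allowed to see.
  return all.filter((item) => !item.requires || user.roles.includes(item.requires));
}

// Server-side render step: plain async/await, no loading-state hooks.
async function renderMenu(user) {
  const items = await getVisibleMenu(user);
  return items.map((item) => `<li>${item.label}</li>`).join("");
}

// renderMenu({ roles: ["admin"] }) resolves to "<li>Home</li><li>Admin</li>";
// a user without roles only gets "<li>Home</li>".
```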

      • By ec109685 2025-06-02 7:39

        How do you avoid having your users stare at spinners while their browser makes api calls (some of them depending on each other) in order to render the page?

      • By presentation 2025-06-02 12:21

        That's fine for you, but not all React users are you. It makes much sense in practice for me.

      • By robertoandred 2025-06-01 18:46

        RSCs work just fine with static deployments and SPAs. (All Next sites are SPAs.)

    • By MaxBav 2025-06-01 13:57

      Every use case has its optimal stack. Isomorphic rendering (NextJS, Nuxt, Sveltekit with "non-static" adapters, ...) is a good fit for very few use cases only.

      Many "thought leaders" still don't get the math right. At first visit, your Next app can't serve individual content. So you need two round trips. Both are slow as they are typically served from a Node server.

      Serving the app (properly built and bundled, e.g. using Astro or a small and fast SPA like Solid or Svelte) from a CDN and the data from an API is faster at first visit.

      On subsequent visits the Next app can serve the rendered page with individual content. Nice and fast. But the CDN-hosted app is still in the browser cache. It's even faster! So again only one request to the backend for the individual data is needed.

      Regarding SEO, the arguments for isomorphic rendering are also flawed. If you care about SEO, just create static HTML (again, Astro makes it very easy) and put it on a CDN. Why should a crawler care about the individual content that isomorphic frameworks can provide? For SEO, the response to an anonymous request matters. So just the static content.

      IMO 99% of the use cases are better solved with traditional server rendered MPAs (e.g. Django or ASP.NET MVC), fast SPAs (not React, but Solid, Svelte or Vue) and if SEO and first paint really matter static sites (e.g. Astro).

    • By Garlef 2025-06-01 7:46 (1 reply)

      I think there's a world where you would use the code structuring of RSCs to compile a static page that's broken down into small chunks of html, css, js.

      Basically: If you replace the "$1" placeholders from the article with URIs you wouldn't need a server.

      (In most cases you don't need fully dynamic SSR)

      The big downside is that you'd need a good pipeline to also have fast builds/updates in case of content changes: Partial streaming of the compiled static site to S3.

      (Let's say you have a newspaper with thousands of prerendered articles: You'd want to only recompile a single article in case one of your authors edits the content in the CMS. But this means the pipeline would need to smartly handle some form of content diff)
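A minimal sketch of what this comment proposes, assuming a hypothetical build step (the `linkChunks` helper, the `$ref` shape, and the chunk URI are all illustrative, not part of any real RSC format): instead of a server streaming rows that resolve "$1"-style placeholders, each chunk is prerendered to static storage and the placeholder becomes a fetchable URI.

```javascript
// A page tree with an article-style "$1" placeholder.
const page = {
  header: "Welcome to my blog",
  post: "$1", // placeholder, as in the article
  footer: "Hope you like it",
};

// Build-time: walk the tree and replace each "$N" placeholder with a
// reference to a prerendered static chunk; the client can fetch it lazily.
function linkChunks(tree, chunkUris) {
  if (typeof tree === "string" && /^\$\d+$/.test(tree)) {
    return { $ref: chunkUris[tree] };
  }
  if (tree && typeof tree === "object") {
    const out = Array.isArray(tree) ? [] : {};
    for (const [key, value] of Object.entries(tree)) {
      out[key] = linkChunks(value, chunkUris);
    }
    return out;
  }
  return tree;
}

const linked = linkChunks(page, { $1: "/chunks/post-123.json" });
// linked.post now points at a static chunk URI instead of a streamed row,
// so no server is needed at request time -- only static hosting.
```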

      • By danabramov 2025-06-01 8:43

        RSC is perfectly capable of being run at the build-time, which is the default. So that’s not too far from what you’re describing.

    • By kenanfyi 2025-06-01 7:03 (1 reply)

      I find your analysis very good and agree on why companies like Vercel are pushing hard on RSC.

      • By foo42 2025-06-01 7:24 (2 replies)

        [flagged]

        • By tomhow 2025-06-01 7:52

          Please don't do this here. If a comment seems unfit for HN, please flag it and email us at hn@ycombinator.com so we can have a look.

        • By kenanfyi 2025-06-02 15:02

          Sorry, what? Is it just my phrasing or my rant on VC-backed entities pushing things to gain advantage?

    • By chamomeal 2025-06-02 15:29

      Tangent: next.js is pretty amazing but it’s still surprising to me that it’s become the default way to write React. I just don’t enjoy writing next.js apps even though typescript is my absolute favorite language, and I generally love react as well.

    • By presentation 2025-06-02 12:21

      for what it's worth I am a NextJS developer and everyone on my team had a pretty easy time getting used to client/server components.

      Do I wish that it were something like some kind of Haskell-style monad (probably doable in TypeScript!) or a taint or something, rather than a magic string comment at the top of the file? Sure, but it still doesn't seem to be a big deal, at least on my team.

  • By hyfgfh 2025-06-01 5:17 (6 replies)

    The thing I have seen in performance work is people trying to shave ms off loading a page while they fetch several MBs and do complex operations in the FE, when in reality writing a BFF, improving the architecture, and leaner APIs would be a more productive solution.

    We tried to do that with GraphQL, HTTP/2, ... and arguably failed. Until we can properly evolve web standards we won't be able to fix the main issue. Novel frameworks won't do it either.

    • By danabramov 2025-06-01 8:25 (1 reply)

      RSC, which is described at the end of this post, is essentially a BFF (with the API logic componentized). Here’s my long post on this topic: https://overreacted.io/jsx-over-the-wire/ (see BFF midway in the first section).

      • By MaxBav 2025-06-01 14:05

        But with a considerable amount of added complexity and bulk. And operational drawbacks. A well designed API (Go, ASP.NET, Java) and a fast SPA (let's say Solid) without client side global data management, just per component data fetching, are simple and fast. You can use a CDN to cache not only the app but the data.

    • By onion2k 2025-06-01 6:20 (3 replies)

      Doesn't that depend on what you mean by "shave ms loading a page"?

      If you're optimizing for time to first render, or time to visually complete, then you need to render the page using as little logic as possible - sending an empty skeleton that then gets hydrated with user data over APIs is fastest for a user's perception of loading speed.

      If you want to speed up time to first input or time to interactive you need to actually build a working page using user data, and that's often fastest on the backend because you reduce network calls which are the slowest bit. I'd argue most users actually prefer that, but it depends on the app. Something like a CRUD SAAS app is probably best rendered server side, but something like Figma is best off sending a much more static page and then fetching the user's design data from the frontend.

      The idea that there's one solution that will work for everything is wrong, mainly because what you optimise for is a subjective choice.

      And that's before you even get to Dev experience, team topology, Conway's law, etc that all have huge impacts on tech choices.

      • By MrJohz 2025-06-01 8:43 (2 replies)

        > sending an empty skeleton that then gets hydrated with user data over APIs is fastest for a user's perception of loading speed

        This is often repeated, but my own experience is the opposite: when I see a bunch of skeleton loaders on a page, I generally expect to be in for a bad experience, because the site is probably going to be slow and janky and cause problems. And the more of the site is being skeleton-loaded, the more my spirits worsen.

        My guess is that FCP has become the victim of Goodhart's Law — more sites are trying to optimise FCP (which means that _something_ needs to be on the screen ASAP, even if it's useless) without optimising the actual user experience. Which means delaying rendering more and adding more round trips so that content can be loaded later on rather than up front. That produces sites with worse experiences (more loading, more complexity), even though the metric says the experience should be improving.

        • By PhilipRoman 2025-06-01 13:52 (1 reply)

          It also breaks a bunch of optimizations that browsers have implemented over the years. Compare how back/forward history buttons work on reddit vs server side rendered pages.

          • By MrJohz 2025-06-01 14:51 (1 reply)

            It is possible to get those features back, in fairness... but it often requires more work than if you'd just let the browser handle things properly in the first place.

            • By zelphirkalt 2025-06-02 8:47

              Seems like 95% of businesses are not willing to pay the web dev who created the problem in the first place to also fix the problem, and instead want more features released last week.

              The number of websites needlessly forced into being SPAs without working navigation like back and forth buttons is appalling.

        • By Bjartr 2025-06-01 11:40

          > the experience should be improving

          I think it's more the bounce rate is improving. People may recall a worse experience later, but more will stick around for that experience if they see something happen sooner.

      • By motorest 2025-06-01 6:56

        > If you're optimizing for time to first render, or time to visually complete, then you need to render the page using as little logic as possible - sending an empty skeleton that then gets hydrated with user data over APIs is fastest for a user's perception of loading speed.

        I think that OP's point is that these optimization strategies are completely missing the elephant in the room. Meaning, sending multi-MB payloads creates the problem, and shaving a few ms here and there with more complexity while not looking at the performance impact of having to handle multi-MB payloads doesn't seem to be an effective way to tackle the problem.

      • By FridgeSeal 2025-06-02 1:22 (1 reply)

        > speed up time to first input or time to interactive you need to actually build a working page using user data, and that's often fastest on the backend because you reduce network calls which are the slowest bit.

        It’s only fastest to get the loading skeleton onto the page.

        My personal experience with basically any site that has to go through this 2-stage loading exercise is that:

        - content may or may not load properly.

        - I will probably be waiting well over 30 seconds for the actually-useful-content.

        - when it does all load, it _will_ be laggy and glitchy. Navigation won’t work properly. The site may self-initiate a reload, button clicks are…50/50 success rate for “did it register, or is it just heinously slow”.

        I’d honestly give up a lot of fanciness just to have “sites that work _reasonably_” back.

        • By zelphirkalt 2025-06-02 8:56

          30s is probably an exaggeration even for most bad websites, unless you are on a really poor connection. But I agree with the rest of it. Often it isn't even a 2-stages thing but an n-stages thing that happens there.

    • By xiphias2 2025-06-01 5:40 (1 reply)

      At least this post explains why when I load a Facebook page the only thing that really matters (the content) is what loads last

      • By globalise83 2025-06-01 19:53

        When I load a Facebook page the content that matters doesn't even load.

    • By kristianp 2025-06-01 5:33 (1 reply)

      What's a BFF in this context? Writing an AI best friend isn't all that rare these days...

      • By continuational 2025-06-01 5:41 (1 reply)

        BFF (pun intended?) in this context means "backend for frontend".

        The idea is that every frontend has a dedicated backend with exactly the api that that frontend needs.
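The shape of a BFF can be sketched in a few lines; everything here (`fetchUser`, `fetchOrders`, the field names) is hypothetical, standing in for real upstream services:

```javascript
// Stub for an upstream user service call (imagine an HTTP request).
async function fetchUser(id) {
  return { id, name: "Ada", internalFlags: ["beta"] };
}

// Stub for an upstream order service call.
async function fetchOrders(userId) {
  return [{ id: 1, total: 42 }, { id: 2, total: 7 }];
}

// The BFF endpoint aggregates both upstream calls in parallel and returns
// only what the "account page" frontend actually renders, dropping fields
// (like internalFlags) the screen never uses.
async function accountPageData(userId) {
  const [user, orders] = await Promise.all([fetchUser(userId), fetchOrders(userId)]);
  return {
    displayName: user.name,
    orderCount: orders.length,
    totalSpent: orders.reduce((sum, order) => sum + order.total, 0),
  };
}
```

The point is that the API shape is owned by the frontend's needs: one round trip from the browser, with the fan-out to other services happening server-side.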

        • By zelphirkalt 2025-06-02 9:11

          It is a terrible idea organizationally. It puts backend devs at the whims of the often hype-driven and CV-driven development of frontend devs. What often happens is that complexity is moved from the frontend to the backend. But that complexity is not necessarily inherent; it is often accidental complexity, self-inflicted by choices in the frontend. The backend API should facilitate getting the required data to render pages and performing the required operations to interact with that data. Everything else is optimization that one may or may not need.

    • By presentation 2025-06-02 12:23

      One huge point of RSC is that you can use your super heavyweight library in the backend, and then not send a single byte of it to the frontend, you just send its output. It's a huge win in the name of shaving way more than ms from your page.

      One example a programmer might understand - rather than needing to send the grammar and code of a syntax highlighter to the frontend to render formatted code samples, you can keep that on the backend, and just send the resulting HTML/CSS to the frontend, by making sure that you use your syntax highlighter in a server component instead of a client component. All in the same language and idioms that you would be using in the frontend, with almost 0 boilerplate.

      And if for some reason you decide you want to ship that to the frontend, maybe because you want a user to be able to syntax highlight code they type into the browser, just make that component be a client component instead of a server component, et voila, you've achieved it with almost no code changes.

      Imagine what work that would take if your syntax highlighter was written in Go instead of JS.
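A toy illustration of the comment's point (this trivial "highlighter" only wraps a few JS keywords; a real one would bring along large grammars you'd rather not bundle for the client):

```javascript
// Runs on the server. Only its HTML output crosses the wire -- the
// highlighter code itself never ships to the browser.
function highlight(code) {
  return code.replace(/\b(const|function|return)\b/g, '<span class="kw">$1</span>');
}

const payload = highlight("const x = 1;");
console.log(payload); // <span class="kw">const</span> x = 1;
```

Moving this into a client component would mean shipping `highlight` (and, for a real library, its grammars) in the bundle — which is exactly the trade-off the directive lets you flip with one line.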

    • By elcomet 2025-06-01 8:43 (2 replies)

      Too many acronyms, what's FE, BFF?

      • By aeinbu 2025-06-01 9:49

        I was asking the same questions.

        - FE is short for the Front End (UI)

        - BFF is short for Backend For Frontend

      • By holoduke 2025-06-01 9:44

        Front end, and backend for frontend: a backend in which you generally design APIs specific to a page by aggregating multiple other APIs, caching, transforming, etc.

HackerNews