I introduced the concept of bandwidth * delay product to the conversation...
The question was about why use dynamic allocation. In this branch of the thread we were discussing the question "Are there TCP/IP stacks out there in common use that are allocating memory all the time?"
We'd not be happy to see the server or laptop statically reserving this worst-case amount of memory for TCP buffers when it's not in fact slinging around the maximum number of TCP connections, each with a worst-case bandwidth*delay product. Nor would we be happy if the laptop or server only supported little TCP windows that limit performance by capping the amount of data in flight to a low number.
We are happier if the TCP stack dynamically allocates the memory as needed, just like we're happier with dynamic allocation in most other parts of the OS.
We're up to hundreds of Gbps per server, and have been for some years now. E.g. 400 Gbps uses a lot of buffer memory even with a much smaller average RTT. That's not going to be one stream of course, but a zillion smaller streams still add up to the same requirements.
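To make the arithmetic concrete, here's a minimal sketch of the bandwidth*delay product calculation. The link speeds and RTTs plugged in are illustrative numbers, not measurements from any particular deployment:

```python
def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product: data in flight needed to keep the pipe full.

    bandwidth_bps: link rate in bits per second
    rtt_s: round-trip time in seconds
    Returns the BDP in bytes (bits in flight / 8).
    """
    return bandwidth_bps * rtt_s / 8

# A 400 Gbps server NIC with a 1 ms RTT (illustrative datacenter-ish figure):
print(bdp_bytes(400e9, 1e-3))   # 50 MB in flight to fill the pipe

# The same link with an 80 ms cross-continent RTT:
print(bdp_bytes(400e9, 80e-3))  # 4 GB in flight
```

Whether that in-flight data lives in one stream or a zillion smaller ones, the aggregate buffer requirement is the same, which is why statically reserving the worst case per connection gets expensive fast.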
This is far from little-embedded-device territory of course. But still, the latest Wi-Fi is already closer to 10 Gbps than 1.