Ancient X11 scaling technology

2025-06-24 18:58 flak.tedunangst.com

People keep telling me that X11 doesn’t support DPI scaling, or fractional scaling, or multiple monitors, or something. There’s nothing you can do to make it work. I find this surprising. Why doesn’t it work? I figure the best way to find out is try the impossible and see how far we get.

I’m just going to draw a two inch circle on the screen. This screen, that screen, any screen, the circle should always be two inches. Perhaps not the most exciting task, but I figure it’s isomorphic to any other scaling challenge. Just imagine it’s the letter o or a button we wish to draw at a certain size.

I have gathered around me a few screens of different sizes and resolutions. My laptop screen, and then a bit to the right a desktop monitor, and then somewhere over that way a nice big TV. Specifically:

$ xrandr | grep \ connected
eDP connected primary 2880x1800+0+0 (normal left inverted right x axis y axis) 302mm x 189mm
DisplayPort-0 connected 2560x1440+2880+0 (normal left inverted right x axis y axis) 590mm x 334mm
DisplayPort-1 connected 3840x2160+5440+0 (normal left inverted right x axis y axis) 1600mm x 900mm
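
Running the numbers, that's roughly 242 pixels per inch on the laptop (2880 / 302 × 25.4), about 110 on the desktop monitor, and about 61 on the TV, so the same two inch circle needs a very different number of pixels on each screen.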

I think I just spoiled the ending, but here we go anyway.

I’m going to draw the circle with OpenGL, using a simple shader and OBT. There’s a bunch of not very exciting code to create a window and a GLX context, but eventually we’re going to be looking at the shader. This may not be the best way to draw a circle, but it’s my way. For reference, the full code is in circle.c.

void main()
{
    float thick = radius / 10;
    // thin the ring to 2px where it crosses the horizontal centerline
    if (abs(center.y - gl_FragCoord.y) < thick/2)
        thick = 2;
    float pi = 3.14159;
    float d = distance(gl_FragCoord.xy, center);
    // angle around the center, remapped to the 0..1 range
    float angle = atan(gl_FragCoord.y - center.y, gl_FragCoord.x - center.x);
    angle /= 2 * pi;
    angle += 0.5;
    angle += 0.25;
    if (angle > 1.0)
        angle -= 1.0;
    // fade the ring out towards its inner and outer edges
    float amt = (thick - abs(d - radius)) / thick;
    if (d < radius + thick && d > radius - thick)
        fragment = vec4(rgb(angle)*amt, 1.0); // rgb() is a helper in circle.c
    else
        discard;
}

I got a little carried away and made a pretty color wheel instead of a flat circle.

The key variable is radius, which tells us how many pixels from the center the circle should be. But where does the shader get this from?

    glUniform1f(0, radius);
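
On the shader side, that call presumably lands in a uniform declared at location 0. A minimal sketch of the declarations the fragment shader above relies on (the layout qualifier and the exact center and fragment declarations are assumptions here; the real ones are in circle.c):

    // assumed declarations; location 0 matches the glUniform1f(0, ...) call
    layout(location = 0) uniform float radius; // one inch, in pixels
    uniform vec2 center;                       // window center, in pixels
    out vec4 fragment;                         // color output used by main()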

Okay, but seriously. We listen for configure events. This is the X server telling us our window has been moved or resized. Something has changed, so we should figure out where we are and adjust accordingly.

case ConfigureNotify: {
    XConfigureEvent *xev = (void *)&ev;
    int x = xev->x;
    // find which screen the window's left edge is on, then convert
    // that screen's pixels-per-millimeter to pixels-per-inch (25.4 mm/in)
    for (int i = 0; i < 16; i++) {
        if (x >= screen_x[i] && x - screen_x[i] < screen_w[i]) {
            float r = screen_w[i] / screen_mm[i] * 25.4;
            if (r != radius) {
                radius = r;
            }
            break;
        }
    }
    width = xev->width;
    height = xev->height;
}

Getting closer. The numbers we need come from the X server.

XRRScreenResources *res = XRRGetScreenResourcesCurrent(disp, root);
float screen_mm[16] = { 0 };
float screen_w[16] = { 0 };
float screen_x[16] = { 0 };
int j = 0;
// physical width (millimeters) lives in the output info...
for (int i = 0; i < res->noutput; i++) {
    XRROutputInfo *info = XRRGetOutputInfo(disp, res, res->outputs[i]);
    screen_mm[j++] = info->mm_width;
}
j = 0;
// ...while pixel width and position live in the crtc info
for (int i = 0; i < res->ncrtc; i++) {
    XRRCrtcInfo *info = XRRGetCrtcInfo(disp, res, res->crtcs[i]);
    screen_w[j] = info->width;
    screen_x[j++] = info->x;
}

It’s somewhat annoying that physical width and virtual width are in different structures, and we have to put the puzzle back together, but there it is.

Some more code to handle expose events, the draw loop, etc., and that’s it. A beautiful circle sized just right. Drag it over onto the next monitor, and it changes size. Or rather, it maintains its size. Send it over to the next monitor, and same as before.

Time for the visual proof. A nice pretty circle on my laptop. Another circle on my monitor. And despite the 4K resolution, a somewhat pixely circle on my TV. Turns out the hardest part of this adventure was trying to hold an uncooperative tape measure in place with one hand while trying to get a decent, or not, photo with the other.


Comments

  • By pedrocr 2025-06-24 19:17 (12 replies)

    That's probably better than most scaling done on Wayland today because it's doing the rendering directly at the target resolution instead of doing the "draw at 2x scale and then scale down" dance that was popularized by OSX and copied by Linux. If you do it that way you both lose performance and get blurry output. The only corner case a compositor needs to cover is when a client is straddling two outputs. And even in that case you can render at the higher size and get perfect output in one output and the same downside in blurriness in the other, so it's still strictly better.

    It's strange that Wayland didn't do it this way from the start given its philosophy of delegating most things to the clients. All you really need to do arbitrary scaling is tell apps "you're rendering to a MxN pixel buffer and as a hint the scaling factor of the output you'll be composited to is X.Y". After that the client can handle events in real coordinates and scale in the best way possible for its particular context. For a browser, PDF viewer, or image processing app that can render at arbitrary resolutions, not being able to do that is very frustrating if you want good quality and performance. Hopefully we'll be finally getting that in Wayland now.

    • By kccqzy 2025-06-24 19:35 (4 replies)

      > doing the "draw at 2x scale and then scale down" dance that was popularized by OSX

      Originally OS X defaulted to drawing at 2x scale without any scaling down because the hardware was designed to have the right number of pixels for 2x scale. The earliest retina MacBook Pro in 2012 for example was 2x in both width and height of the earlier non-retina MacBook Pro.

      Eventually I guess the cost of the hardware made this too hard. I mean for example how many different SKUs are there for 27-inch 5K LCD panels versus 27-inch 4K ones?

      But before Apple committed to integer scaling factors and then scaling down, it experimented with more traditional approaches. You can see this in earlier OS X releases such as Tiger or Leopard. The thing is, it probably took too much effort for even Apple itself to implement in its first-party apps so Apple knew there would be low adoption among third party apps. Take a look at this HiDPI rendering example in Leopard: https://cdn.arstechnica.net/wp-content/uploads/archive/revie... It was Apple's own TextEdit app and it was buggy. They did have a nice UI to change the scaling factor to be non-integral: https://superuser.com/a/13675

      • By pedrocr 2025-06-24 20:58 (2 replies)

        > Originally OS X defaulted to drawing at 2x scale without any scaling down because the hardware was designed to have the right number of pixels for 2x scale.

        That's an interesting related discussion. The idea that there is a physically correct 2x scale and fractional scaling is a tradeoff is not necessarily correct. First because different users will want to place the same monitor at different distances from their eyes, or have different eyesight, or a myriad other differences. So the ideal scaling factor for the same physical device depends on the user and the setup. But more importantly because having integer scaling be sharp and snapped to pixels and fractional scaling a tradeoff is mostly a software limitation. GUI toolkits can still place all their UI at pixel boundaries even if you give them a target scaling of 1.785. They do need extra logic to do that and most can't. But in a weird twist of destiny the most used app these days is the browser and the rendering engines are designed to output at arbitrary factors natively and in most cases can't because the windowing system forces these extra transforms on them. 3D engines are another example, where they can output whatever arbitrary resolution is needed but aren't allowed to. Most games can probably get around that in some kind of fullscreen mode that bypasses the scaling.

        I think we've mostly ignored these issues because computers are so fast and monitors have gotten so high resolution that the significant performance penalty (2x easily) and the introduced blurriness mostly go unnoticed.

        > Take a look at this HiDPI rendering example in Leopard

        That's a really cool example, thanks. At one point Ubuntu's Unity had a fake fractional scaling slider that just used integer scaling plus font size changes for the intermediate levels. That mostly works very well from the point of view of the user. Because of the current limitations in Wayland I mostly do that still manually. It works great for single monitor and can work for multiple monitors if the scaling factors work out because the font scaling is universal and not per output.

        • By sho_hn 2025-06-24 21:03 (5 replies)

          What you want is exactly how fractional scaling works (on Wayland) in KDE Plasma and other well-behaved Wayland software: The scale factor can be something quirky like your 1.785, and the GUI code will generally make sure that things nevertheless snap to the pixel grid to avoid blurry results, as close to the requested scaling as possible. No "extra window system transforms".

          • By pedrocr 2025-06-24 21:15 (2 replies)

            That's what I referred to with "we'll be finally getting that in Wayland now". For many years the Wayland protocol could only communicate integer scale factors to clients. If you asked for 1.5 what the compositors did was ask all the clients to render at 2x at a suitably fake size and then scale that to the final output resolution. That's still mostly the case in what's shipping right now I believe. And even in integer scaling things like events are sent to clients in virtual coordinates instead of just going "here's your NxM buffer, all events are in those physical coordinates, all scaling is just metadata I give you to do whatever you want with". There were practical reasons to do that in the beginning for backwards compatibility but the actual direct scaling is having to be retrofitted now. I'll be really happy when I can just set 1.3 scaling in sway and have that just mean that sway tells Firefox that 1.3 is the scale factor and just gets back the final buffer that doesn't need any transformations. I haven't checked very recently but it wasn't possible not too long ago. If it is now I'll be a happy camper and need to upgrade some software versions.

            • By sho_hn 2025-06-24 21:36 (2 replies)

              In KDE Plasma we've supported the way you like for quite some years, because Qt is a cross-platform toolkit that supported fractional on e.g. Windows already and we just went ahead and put the mechanisms in place to make use of that on Wayland.

              The standardized protocols are more recent (and of course we heavily argued for them).

              Regarding the way the protocol works and something having to be retrofitted, I think you are maybe a bit confused about the way the scale factor and buffer scale work on wl_output and wl_surface?

              But in any case, yes, I think the happy camper days are coming for you! I also find the macOS approach atrocious, so I appreciate the sentiment.

              • By pedrocr 2025-06-24 21:53 (1 reply)

                Thanks! By retrofitting I mean having to have a new protocol with this new opt-in method where some apps will be getting integer scales and go through a transform and some apps will be getting a fractional scale and rendering directly to the output resolution. If this had worked "correctly" from the start the compositors wouldn't even need to know anything about scaling. As far as they knew the scaling metadata could have been an opaque value that they passed from the user config to the clients to figure out. I assume we're stuck forever with all compositors having to understand all this instead of just punting the problem completely to clients.

                When you say you supported this for quite some years was there a custom protocol in KWin to allow clients to render directly to the fractionally scaled resolution? ~4 years ago I was frustrated by this when I benchmarked a 2x slowdown from RAW file to the same number of pixels on screen when using fractional scaling and at least in sway there wasn't a way to fix it or much appetite to implement it. It's great to see it is mostly in place now and just needs to be enabled by all the stack.

                • By sho_hn 2025-06-24 22:46

                  Oh, ok. Yeah, this I agree with, and I think plenty of people do - having integer-only scaling in the core protocol at the start was definitely a regrettable oversight and is a wart on things.

                  > When you say you supported this for quite some years was there a custom protocol in KWin to allow clients to render directly to the fractionally scaled resolution?

                  Qt had a bunch of different mechanisms for how you could tell it to use a fractional scale factor, from setting an env var to doing it inside a "platform plugin" each Qt process loads at runtime (Plasma provides one), etc. We also had a custom-protocol-based mechanism (zwp_scaler_dev iirc) that basically had a set_scale with a 'fixed' instead of an 'int'. Ultimately this was all pretty Qt-specific though in practice. To get adoption outside of just our stack a standard was of course needed, I guess what we can claim though is that we were always pretty firm we wanted proper fractional and to put in the work.

              • By atq2119 2025-06-25 0:25

                Thank you for that. The excellent fractional scaling and multi-monitor support is why I finally switched back to KDE full time (after first switching away during the KDE 3 to 4 mess).

            • By zokier 2025-06-24 21:20 (2 replies)

              > That's still mostly the case in what's shipping right now I believe

              All major compositors support the fractional scaling extension these days, which allows pixel perfect rendering afaik, and I believe Qt6 and GTK4 also support it.

              https://wayland.app/protocols/fractional-scale-v1#compositor...
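
              For a sense of how small the client side of that protocol is, here's a minimal sketch in C (assuming the wayland-scanner-generated header and a wp_fractional_scale_v1 object already created for the surface via the manager; error handling omitted):

                  #include "fractional-scale-v1-client-protocol.h"

                  static void preferred_scale(void *data,
                      struct wp_fractional_scale_v1 *fs, uint32_t scale)
                  {
                      // the compositor sends the factor in 120ths: 180 means 1.5x
                      double factor = scale / 120.0;
                      // repaint at the new size and attach a matching buffer
                  }

                  static const struct wp_fractional_scale_v1_listener fs_listener = {
                      .preferred_scale = preferred_scale,
                  };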

              • By cycomanic 2025-06-24 22:01 (4 replies)

                That's great, however why do we use a "scale factor" in the first place? We had a perfectly fitting metric in DPI, why can't I set the desired DPI for every monitor, but instead need to calculate some arbitrary scale factor?

                I'm generally a strong wayland proponent and believe it's a big step forward over X in many ways, but some decisions just make me scratch my head.

                • By zokier 2025-06-24 22:50 (1 reply)

                  DPI (or PPI) is an absolute measurement. Scale factor is intentionally relative. Different circumstances will want to have different scale factor : dpi ratios; most software do not care if certain UI element is exactly x mm in size, but instead just care that their UI element scale matches the rest of the system.

                  Basically scale factor neatly encapsulates things like viewing distance, user eyesight, dexterity, and preference, different input device accuracy, and many others. It is easier to have human say how big/small they want things to be than have gazillion flags for individual attributes and then some complicated heuristics to deduce the scale.

                  • By cycomanic 2025-06-25 1:15 (1 reply)

                    I disagree, I don't want a relative metric. You're saying scale factor neatly encapsulates viewing distance, eyesight, preference, but compared to what? Scale is meaningless if I don't have a reference point. If I have two different size monitors you have now created a metric where a scale of 2x means something completely different. So to get things to look the same I either have to manually calculate DPI or resort to trial and error until it looks right. Same thing if I change monitors, I now have to try until I get the desired scale, while if I had DPI I would not have to change a thing.

                    > It is easier to have human say how big/small they want things to be than have gazillion flags for individual attributes and then some complicated heuristics to deduce the scale.

                    I don't understand why I need gazillion flags, I just set desired DPI (instead of scale). But an absolute metric is almost always better than a relative metric, especially if the relative point is device dependent.

                    • By meindnoch 2025-06-25 9:29 (1 reply)

                      What you actually want is not DPI (or PPI, pixels per inch) but PPD (pixels per degree). But that depends on the viewing distance.

                      • By account42 2025-06-25 14:24

                        Not even that - my mom and I might sit the same distance from screens of the same size but she will want everything to be scaled larger than I do. Ultimately, it's a preference and not something that should strictly match some objective measurement.

                • By sho_hn 2025-06-24 22:40 (2 replies)

                  The end-user UIs don't ask you to calculate anything. Typically they have a slider from 100% to, say, 400% and let you set this to something like 145%.

                  This may take some getting used to if you're familiar with DPI and already know the value you like, but for non-technical users it's more approachable. Not everyone knows DPI or how many dots they want to their inches.

                  That the 145% is 1.45 under the hood is really an implementation detail.

                  • By cycomanic 2025-06-25 1:23 (1 reply)

                    I don't care about what we call the metric, I argue that a relative metric, where the reference point is device dependent, is simply bad design.

                    I challenge you, tell a non-technical user to set two monitors (e.g. laptop and external) to display text/windows at the same size. I will guarantee you that it will take them a significant amount of time moving those relative sliders around. If we had an absolute metric it would be trivial. Similarly, for people who regularly plug into different monitors, they would simply set a desired DPI and everywhere they plug into things would look the same instead of having to open the scale menu every time.

                    • By sho_hn 2025-06-25 1:28 (1 reply)

                      I see where you are coming from and it makes sense.

                      I will also say though that in the most common cases where people request mixed scale factor support from us (laptop vs. docked screen, screen vs. TV) there are also other form factor differences such as viewing distance that don't make folks want to match DPI, and "I want things bigger/smaller there" is difficult to respond to with "calculate what that means to you in terms of DPI".

                      For the case "I have two 27" monitors side-by-side and only one of them is 4K and I want things to be the same size on them" I feel like the UI offering a "Match scale" action/suggestion and then still offering a single scale slider when it sees that scenario might be a nice approach.

                      • By cycomanic 2025-06-25 3:59 (2 replies)

                        > I see where you are coming from and it makes sense.

                        I actually agree (even though I did not express that in my original post) that DPI is probably not a good "user visible" metric. However, I find that the scaling factor relative to some arbitrary value is inferior in every way. Maybe it comes from the fact that we did not have proper fractional scaling support earlier, but we are now in the non-sensical situation that for the same laptop with the same display size (but different resolutions, e.g. one HiDPI one normal), you have very different UI element sizes, simply because the default is now to scale 100% for normal displays and 200% for HiDPI. Therefore the scale doesn't really mean anything and people just end up adjusting again and again, surely that's even more confusing for non-technical users.

                        > I will also say though that in the most common cases where people request mixed scale factor support from us (laptop vs. docked screen, screen vs. TV) there are also other form factor differences such as viewing distance that doesn't make folks want to match DPI, and "I want things bigger/smaller there" is difficult to respond to with "calculate what that means to you in terms of DPI".

                        From my anecdotal evidence, most (even all) people using a laptop for work have the laptop next to the monitor and actually adjust scaling so that elements are similar size. Or the other extreme, they simply take the defaults and complain that one monitor makes all their text super small.

                        But even the people who want things bigger or smaller depending on circumstances, I would argue are better served if the scaling factor is relative to some absolute reference, not the size of the pixels on the particular monitor.

                        > For the case "I have two 27" monitors side-by-side and only one of them is 4K and I want things to be the same size on them" I feel like the UI offering a "Match scale" action/suggestion and then still offering a single scale slider when it sees that scenario might be a nice approach.

                        Considering that we now have proper fractional scaling, we should just make the scale relative to something like 96 DPI, and then have a slider to adjust. This would serve all use cases. We should not really let our designs be governed by choices we made because we could not do proper scaling previously.

                        • By account42 2025-06-25 14:36

                          The only place where this is a problem, though, is the configuration UI. The display configuration could be changed to show a scale relative to the display size (so 100% on all displays means sizes match) while the protocol keeps talking to applications in scale relative to the pixel size (so programs don't need to care about DPI and instead just have one scale factor).

                        • By kccqzy 2025-06-25 17:18

                          I find that explaining all of the above considerations to the user in a UI is hard. It's better to just let the user pick from several points on a slider for them to see for themselves.

                  • By atq2119 2025-06-25 0:26 (1 reply)

                    Not to mention that only a small fraction of the world uses inches...

                    • By account42 2025-06-25 14:37

                      For display (diagonal) sizes inches have become the default unit everywhere I've been to.

                • By MadnessASAP 2025-06-24 23:50 (1 reply)

                  I'm not privy to what discussions happened during the protocol development. However using scale within the protocol seems more practical to me.

                  Not all displays accurately report their DPI (or can, such as projectors). Not all users, such as myself, know their monitor's DPI. Finally the scaling algorithm will ultimately use a scale factor, so at a protocol level that might as well be what is passed.

                  There is of course nothing stopping a display management widget/settings page/application from asking for DPI and then converting it to a scale factor, I just don't know of any that exist.

                  • By cycomanic 2025-06-25 4:05

                    As I replied to the other poster. I don't think DPI should necessarily be the exposed metric, but I do think that we should use something non device-dependent as our reference point, e.g. make 100% = 96 dpi.

                    I can guarantee that it is surprising to non-technical users (and a source of frustration for technical users) that the scale factor and UI element size can be completely different on two of the same laptops (just a different display resolution, which is quite common). And it's also unpredictable which one will have the larger UI elements. Generally I believe UI should behave as predictably as possible.

                • By Dylan16807 2025-06-25 3:40

                  > We had a perfectly fitting metric in DPI, why can't I set the desired DPI for every monitor, but instead need to calculate some arbitrary scale factor?

                  Because certain ratios work a lot better than others, and calculating the exact DPI to get those benefits is a lot harder than estimating the scaling factor you want.

                  Also the scaling factor calculation is more reliable.

              • By pedrocr 2025-06-24 21:30 (1 reply)

                Seems like the support is getting there. I just checked Firefox and it has landed the code but still has it disabled by default. Most users that set 1.5x on their session are probably still getting needless scaling but hopefully that won't last too long.

                • By chrismorgan 2025-06-25 5:32

                  It landed four years ago, but had debilitating problems. Maybe a year ago when I last tried it, it was just as bad—no movement at all. But now, it seems largely fixed, hooray! Just toggled widget.wayland.fractional-scale.enabled and restarted, and although there are issues with windows not synchronising their scale (my screen is 1.5×; at startup, one of two windows stayed 2×; on new window, windows are briefly 2×; on factor change, sometimes chrome gets stuck at the next integer, probably the same issue), it’s all workaroundable and I can live with it.

                  Ahhhhhhhh… so nice.

          • By enriquto 2025-06-24 22:50 (2 replies)

            > The scale factor can be something quirky like your 1.785, and the GUI code will generally make sure that things nevertheless snap to the pixel grid to avoid blurry results

            This is horrifying! It implies that, for some scaling factors, the lines of text of your terminal will be of different height.

            Not that the alternative (pretend that characters can be placed at arbitrary sub-pixel positions) is any less horrifying. This would make all the lines in your terminal of the same height, alright, but then the same character at different lines would look different.

            The bitter truth is that fractional scaling is impossible. You cannot simply scale images without blurring them. Think about an alternating pattern of white and black rows of pixels. If you try to scale it to a non-integer factor the result will be either blurry or aliased.

            The good news is that fractional scaling is unnecessary. You can just use fonts of any size you want. Moreover, nowadays pixels are so small that you can simply use large bitmap fonts and they'll look sharp, clean and beautiful.

            • By kccqzy 2025-06-25 0:29 (1 reply)

              > The bitter truth is that fractional scaling is impossible.

              That's overly prescriptive in terms of what users want. In my experience users who are used to macOS don't mind slightly blurred text. And users who are traditionalists and perhaps Windows users prefer crisper text at the expense of some height mismatches. It's all very subjective.

              • By jcelerier 2025-06-25 11:04 (1 reply)

                > In my experience users who are used to macOS don't mind slightly blurred text.

                It always makes me laugh when apple users say "oh it's because of the great text rendering!"

                The last time text rendering was any good on MacOS was on MacOS 9, since then it's been a blurry mess.

                That said, googling for "MacOS blurry text" yields pages and pages and pages of people complaining so I am not sure it is that subjective, simply that some people don't even know how good-looking text can look even on a large 1080p monitor

                • By kccqzy 2025-06-25 14:02

                  You can only search for complaints because those who enjoy it are the silent majority. You can however also search for pages and pages of discussions and tools to bring Mac style text rendering to Windows including the MacType tool. It is very much subjective.

                  "Great text rendering" is also highly subjective mind you. To me greatness means strong adherence to the type face's original shape. It doesn't mean crispness.

            • By sho_hn 2025-06-24 22:54 (1 reply)

              The way it works for your terminal emulator example is that it figures out what makes sense to do for a value of 1.785, e.g. rasterizing text appropriately and making sure that line heights and baselines are at sensible consistent values.

              • By enriquto 2025-06-24 22:59 (1 reply)

                the problem is that there's no reasonable thing to do when the height of the terminal in pixels is not an integer multiple of the height of the font in pixels. Whatever "it" does, will be wrong.

                (And when it's an integer multiple, you don't need scaling at all. You just need a font of that exact size.)

                • By sho_hn 2025-06-24 23:02 (1 reply)

                  You're overthinking things a bit and are also a bit confused about how font sizes work and what "scaling" means in a windowing system context. You are thinking of taking a bunch of pixels and resampling. In the context we're talking about, "scaling" means telling the software what it's expected to output and giving it an opportunity to render accordingly.

                  The way the terminal handles the (literal) edge case you mention is no different from any other time its window size is not a multiple of the line height: It shows empty rows of pixels at the top or bottom.

                  Fonts are only an "exact size" if they're bitmap-based (and when you scale bitmap fonts you are indeed in for sampling difficulties). More typical is to have a font storing vectors and rasterizing glyphs to the needed size at runtime.

                  • By bscphil 2025-06-25 1:06 (1 reply)

                    Given that the context here is talking about terminals, they probably are literally thinking in terms of bitmap based rendering with integer scaling.

                    • By sho_hn 2025-06-25 1:24

                      Right, but most users of terminal emulators typically don't use bitmap fonts anymore and haven't for quite some time (just adding this for general clarity, I'm sure you know it).

          • By chrismorgan 2025-06-25 5:12 (1 reply)

            > The scale factor can be something quirky like your 1.785

            Actually, you can’t have exactly 1.785: the scale is a fraction with denominator 120 <https://wayland.app/protocols/fractional-scale-v1#wp_fractio...>. So you’ll have to settle for 1.783̅ or 1.7916̅.

            • By sho_hn 2025-06-25 6:56

              Aye, the "like" was doing a lot of heavy lifting in that sentence intentionally :).

              But it's HN, so I appreciate someone linking the actual business!

          • By nextaccountic 2025-06-25 18:52

            What is the status of fractional pixels in GTK? Will GTK5 finally get what KDE/Qt has today?

            I recall the issue is that GTK bakes deep down the fact that pixel scaling is done in integers, while in Qt they are in floats

          • By 0x457 2025-06-24 22:54 (1 reply)

            Is it actually in Wayland or is it "implementation should handle it somehow" like most of wayland? Because what is probably 90% of wayland install base only supports communicating integer scales to clients.

            • By sho_hn 2025-06-24 22:56 (1 reply)

              It's in Wayland in the same way everything else is, i.e. fractional scaling is now a protocol included in the standard protocol suite.

              > Because what is probably 90% of wayland install base only supports communicating integer scales to clients.

              As someone shipping a couple of million cars per year running Wayland, the install base is a lot bigger than you think it is :)

              • By 0x457 2025-06-24 23:21 (1 reply)

                Hmmm, sorry, but I don't care about the install base of wayland in a highly controlled environment (the number of different monitor panels you ship is probably smaller than the number of displays with different DPIs in my living room right now).

                • By sho_hn 2025-06-24 23:24

                  90% is still nonsense even in desktop Linux, tho.

        • By astrange 2025-06-24 21:24 (1 reply)

          > But more importantly because having integer scaling be sharp and snapped to pixels and fractional scaling a tradeoff is mostly a software limitation. GUI toolkits can still place all their UI at pixel boundaries even if you give them a target scaling of 1.785. They do need extra logic to do that and most can't.

          The reason Apple started with 2x scaling is because this turned out to not be true. Free-scaling UIs were tried for years before that and never once got to acceptable quality. Not if you want to have image assets or animations involved, or if you can't fix other people's coordinate rounding bugs.

          Other platforms have much lower standards for good-looking UIs, as you can tell from eg their much worse text rendering and having all of it designed by random European programmers instead of designers.

          • By zozbot234 2025-06-24 22:07 (2 replies)

            > Free-scaling UIs were tried for years before that and never once got to acceptable quality.

            The web is a free-scaling UI, which scales "responsively" in a seamless way from feature phones with tiny pixelated displays to huge TV-sized ultra high-resolution screens. It's fine.

            • By roca 2025-06-25 5:25 (1 reply)

              You are correct. I worked on this for years at Mozilla. See https://robert.ocallahan.org/2007/02/units-patch-landed_07.h... and https://robert.ocallahan.org/2014/11/relax-scaling-user-inte... for example. Some of the problems were pretty hard but the Web ended up in a pretty good place --- Web developers pretty much don't think about whether scaling factors are fractional or not, and things just work... well enough that some people don't even know the Web is "free-scaling UI"!

              • By account42 2025-06-25 14:43 (1 reply)

                It mostly works but you can still run into issues when you e.g. want to have an element size match the border of another. Things like that that used to work don't anymore due to the tricks needed to make fractional scaling work well enough for other uses.

                • By kccqzy 2025-06-25 17:23

                  Why wouldn't it work? The border-size accepts the same kind of length units as height or width, no?

            • By astrange 2025-06-24 22:31 (2 replies)

              That's actually a different kind of scaling. The one at issue here is closer to cmd-plus/minus on desktop browsers, or two-finger zooming on phones. It's hard to make that look good unless you only have simple flat UIs like the one on this website.

              They did make another attempt at it for apps with Dynamic Type though.

              • By atq2119 2025-06-25 0:38 (1 reply)

                I'm certain that web style scaling is what the vast majority of desktop users actually want from fractional desktop scaling.

                Thinking that two finger zooming style scaling is the goal is probably the result of misguided design-centric thinking instead of user-centric thinking.

                • By rusk 2025-06-25 6:12 (1 reply)

                  > misguided design-centric thinking

                  More like “let the device driver figure it out” - Apple is after all a hardware company first.

                  • By atq2119 2025-06-25 16:08

                    In terms of how its business works, Apple is primarily a fashion company.

                    A deeply technical one, yes, but that's not what drives their decision making.

              • By account42 2025-06-25 14:47

                User scale and device scale are combined into one scale factor as far as the layout / rendering engine is concerned and thus are solved in the same way.

      • By trinix912 2025-06-25 10:55 (1 reply)

        Out of curiosity, do you happen to know why Apple thought that would be the cause for low adoption among 3rd party apps? Isn't scaling something that the OS should handle, that should be completely transparent, something that 3rd party devs can forget exists at all? Was it just that their particular implementation required apps to handle things manually?

        • By kccqzy 2025-06-25 14:01

          I can only offer a hypothesis. Historically UI sizing was done in pixels, which means they are always integers. When developers support fractional scaling they can either update the app to do all calculations in floating point and store all intermediate results in floating point. That's hard. Or they could do calculations in floating point but round to integers eagerly. That results in inconsistent spacing and other layout bugs.

          With 2x scaling there only needs to be points and pixels which are both integers. Developers' existing code dealing with pixels can usually be reinterpreted to mean points, with only small changes needed to convert to and from pixels.

          With the 2x-and-scale-down approach the scaling is mostly done by the OS and using integer scaling makes this maximally transparent. The devs usually only need to supply higher resolution artwork for icons etc. This means developers only need to support 1x and 2x, not a continuum between 1.0 and 3.0.
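
          A toy illustration of the eager-rounding problem (hypothetical numbers, not from any particular toolkit):

              #include <math.h>
              #include <stdio.h>

              int main(void) {
                  double scale = 1.25;      // fractional scale factor
                  int item = 10, count = 3; // three 10px-wide items in a row
                  // rounding each item eagerly vs. rounding the total once
                  int eager = count * (int)lround(item * scale); // 3 * 13 = 39
                  int late = (int)lround(count * item * scale);  // round(37.5) = 38
                  printf("%d vs %d\n", eager, late); // already off by a pixel
                  return 0;
              }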

      • By frizlab 2025-06-25 8:09

        Completely unrelated but man was Aqua beautiful

      • By cosmic_cheese 2025-06-24 21:11

        Even today you run into the occasional foreign UI toolkit app that only renders at 1x and gets scaled up. We’re probably still years out from all desktop apps handling scaling correctly.

    • By ndiddy 2025-06-24 20:38 (3 replies)

      Wayland has supported X11 style fractional scaling since 2022: https://wayland.app/protocols/fractional-scale-v1 . Both Qt and GTK support fractional scaling on Wayland.

      • By hedora 2025-06-25 15:02

        Fractional scaling is the problem, not the solution! It replaces rendering directly at the monitor’s DPI, which is strictly better, and used to be well-supported under Linux.

      • By bscphil 2025-06-25 1:11 (1 reply)

        Rather annoyingly, the compositor support table on this page seems to be showing only the latest version of each compositor (plus or minus a month or two, e.g. it's behind on KWin). I assume support for the protocol predates these versions for the most part? Do you know when the first versions of KDE and Gnome to support the protocol were released? Asking because some folks in this thread have claimed that a large majority of shipped Wayland systems don't support it, and it would be interesting to know if that's not the case (e.g. if Debian stable had support in Qt and GTK applications).

        • By sho_hn 2025-06-25 1:45

          We first shipped support for wp-fractional-scale-v1 in Plasma 5.27 in early 2023, support for it in our own software vastly improved with Plasma 6 (and Qt 6) however.

    • By resonious 2025-06-25 2:15 (2 replies)

      As someone who just uses Linux but doesn't write compositor code or really know how they work: Wayland supports fractional scaling way better than X11. At least I was unable to get X11 to do 1.5x scale at all. The advice was always "just increase font size in every app you use".

      Then when you're on Wayland using fractional scaling, XWayland apps look very blurry all the time while Wayland-native apps look great.

      • By waldiri 2025-06-25 8:20

        As a similar kind of user, I set Xft.dpi: 130 in .Xresources.

        If I want to use multiple monitors with different dpis, then I update it on every switch via echoing the above to `xrdb -merge -`, so newly launched apps inherit the dpi of the monitor they were started on.
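
        Concretely, the per-monitor switch is a one-liner (with 130 standing in for whatever value suits the target monitor):

            $ echo "Xft.dpi: 130" | xrdb -merge -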

        Dirty solution, but results are pretty nice and without any blurriness.

      • By 1718627440 2025-06-25 9:12

        xrandr --output HDMI1 --scale 1.5x1.5

    • By pwnna 2025-06-25 1:44 (1 reply)

      So I don't understand where the meme of the blurry super-resolution based downsampling comes from. If that is the case, what is super-resolution antialiasing[1] then? An image rendered at a higher resolution and then downsampled is usually sharper than one rendered directly at the downsampled resolution. This is because it will preserve the high frequency component of the signal better. There are multiple other downsampling-based anti-aliasing techniques which all boost signal-to-noise ratio. Does this not work for UI as well? Most of it is vector graphics. Bitmap icons will need to be updated but the rest of the UI (text) should be sharp.

      I know people mention 1 pixel lines (perfectly horizontal or vertical). Then they go multiply by 1.25 or whatever and go like: oh look 0.25 pixel is a lie therefore fractional scaling is fake (sway documentation mentions this to this day). This doesn't seem like it holds in practice other than from this very niche mental exercise. At sufficiently high resolution, which is the case for the display we are talking about, do you even want 1 pixel lines? It will be barely visible. I have this problem now on Linux. Further, if the line is draggable, the click zones becomes too small as well. You probably want something that is of some physical dimension which will probably take multiple pixels anyways. At that point you probably want some antialiasing that you won't be able to see anyways. Further, single pixel lines don't have to be exactly the color the program prescribed anyway. Most of the perfectly horizontal and vertical lines on my screen are all grey-ish. Having some AA artifacts will change its color slightly but don't think it will have material impact. If this is the case, then super resolution should work pretty well.

      Then really what you want is something as follows:

      1. Super-resolution scaling for most "desktop" applications.

      2. Give the native resolution to some full screen applications (games, video playback), and possibly give the native resolution of a rectangle on screen to applications like video playback. This avoids rendering at a higher resolution then downsampling which can introduce information loss for these applications.

      3. Now do this on a per-application basis, instead of per-session basis. No Linux DE implements this. KDE implements per-session which is not flexible enough. You have to do it for each application on launch.

      [1]: https://en.wikipedia.org/wiki/Supersampling

      • By c-hendricks 2025-06-25 5:44

        > So I don't understand where the meme of the blurry super-resolution based down sampling comes from. If that is the case, what is super-resolution antialiasing

        It removes jaggies by using lots of little blurs (averaging)

    • By jdsully 2025-06-24 22:25 (1 reply)

      Windows tried this for a long time and literally no app was able to make it work properly. I spent years of my life making Excel have a sane rendering model that worked on device independent pixels and all that, but its just really hard for people not to think in raw pixels.

      • By kllrnohj 2025-06-25 0:47

        And yet every Android app does it just fine :)

        The real answer is just it's hard to bolt this on later, the UI toolkit needs to support it from the start

    • By sho_hn 2025-06-24 20:32 (1 reply)

      > doing the "draw at 2x scale and then scale down" dance that was popularized by OSX and copied by Linux

      Linux does not do that.

      > It's strange that Wayland didn't do it this way from the start

      It did (initially for integer scale factors, later also for fractional ones, though some Wayland-based environments did it earlier downstream).

      • By maxdamantus 2025-06-25 0:07 (1 reply)

        > Linux does not do that.

        It did (or at least Wayland compositors did).

        > It did

        It didn't.

        I complained about this a few years ago on HN [0], and produced some screenshots [1] demonstrating the scaling artifacts resulting from fractional scaling (1.25).

        This was before fractional scaling existed in the Wayland protocol, so I assume that if I try it again today with updated software I won't observe the issue (though I haven't tried yet).

        In some of my posts from [0] I explain why it might not matter that much to most people, but essentially, modern font rendering already blurs text [2], so further blurring isn't that noticable.

        [0] https://news.ycombinator.com/item?id=32021261

        [1] https://news.ycombinator.com/item?id=32024677

        [2] https://news.ycombinator.com/item?id=43418227

        • By sho_hn 2025-06-25 1:21

          The "It did" was about the mechanism (Wayland did tell the clients the scale and expected them to render acccordingly). Yes, fractional wasn't in the core protocol at the start, but that wasn't the object of discussion (it was elsewhere, as you can see in the sibling threads that evolved, where I also totally agree this was a huge wart).

    • By wmf 2025-06-24 19:32 (2 replies)

      None of the toolkits (Motif, Tk, Gtk, Qt, etc.) could handle fractional scaling so if Wayland had taken the easy way out it would break every app.

      • By nixosbestos 2025-06-24 20:12

        Except for the fact that Wayland has had a fractional scaling protocol for some time now. Qt implements it. There's some unknown reason that GTK won't pick it up. But anyway, it's definitely there. There's even a beta-level implementation in Firefox, etc.

      • By lostmsu 2025-06-24 20:12 (1 reply)

        Why is Wayland trying to monkey patch something that's broken elsewhere?

        • By wmf 2025-06-24 20:26 (2 replies)

          Do you want to be right or do you want to display apps.

          • By trinix912 2025-06-25 10:58

            That is right, but if the whole point of Wayland is to fix what X can't, then why not do it right from the start? Things would break anyways. Otherwise it's not really fixing all glaring issues X has.

          • By lostmsu 2025-06-25 2:55

            How many apps will you display if you don't display them right? Are you ready to tell me poor graphics is not one of the reasons people not use Linux? You won't display apps to the users you lost. Instead Windows will.

    • By hedora 2025-06-25 14:53

      I’ll just add that it is much better than fractional scaling.

      I switched to high dpi displays under Linux back in the late 1990’s. It worked great, even with old toolkits like xaw and motif, and certainly with gtk/gnome/kde.

      This makes perfect sense, since old unix workstations tended to have giant (for the time) frame buffers, and CRTs that were custom-built to match the video card capabilities.

      Fractional scaling is strictly worse than the way X11 used to work. It was a dirty hack when Apple shipped it (they had to, because their third party software ecosystem didn’t understand dpi), but cloning the approach is just dumb.

    • By lotharcable 2025-06-25 19:33

      It is using OpenGL to draw instead of using X11.

      Which pretty much means that it is using the same code paths and drivers that get used in Wayland.

    • By zozbot234 2025-06-24 20:17 (4 replies)

      Isn't OS X graphics supposed to be based on Display Postscript/PDF technology throughout? Why does it have to render at 2x and downsample, instead of simply rendering vector-based primitives at native resolution?

      • By kalleboo 2025-06-25 0:21

        OS X could do it, they actually used to support enabling fractional rendering like this through a developer tool (Quartz Debug)

        There were multiple problems making it actually look good though - ranging from making things line up properly at fractional sizes (e.g. a "1 point line" becomes blurry at 1.25 scale) to the fact that most applications use bitmap images and not vector graphics for their icons (and this includes the graphic primitives Apple used for the "lickable" buttons throughout the OS).

        edit: I actually have an iMac G4 here so I took some screenshots since I couldn't find any online. Here is MacOS X 10.4 natively rendering windows at fractional sizes: https://kalleboo.com/linked/os_x_fractional_scaling/

        IIRC later versions of OS X than this actually had vector graphics for buttons/window controls

      • By astrange 2025-06-24 21:26

        No, CoreGraphics just happened to have drawing primitives similar to PDF.

        Nobody wants to deal with vectors for everything. They're not performant enough (harder to GPU accelerate) and you couldn't do the skeumorphic UIs of the time with them. They have gotten more popular since, thanks to flat UIs and other platforms with free scaling.

      • By qarl 2025-06-24 22:24 (1 reply)

        You're thinking of NeXTSTEP. Before OS X.

        • By kergonath 2025-06-25 0:17

          NeXTSTEP was Display PostScript. MacOS X uses Display PDF since way back in the developer previews.

      • By wmf 2025-06-24 20:36 (2 replies)

        No, I think integer coordinates are pervasive in Carbon and maybe even Cocoa. To do fractional scaling "properly" you need to use floating point coordinates everywhere.

        • By kalleboo 2025-06-24 23:59

          Cocoa/Quartz 2D/Core Graphics uses floating-point coordinates everywhere and drawing is resolution-independent (e.g., the exact same drawing commands are used for screen vs print). Apple used to tout OS X drawing was "based on PDF" but I think that only meant it had the same drawing primitives and could be captured in a PDF output context.

          QuickDraw in Carbon was included to allow for porting MacOS 9 apps, was always discouraged, and is long gone today (it was never supported in 64-bit).

    • By crest 2025-06-25 11:51 (1 reply)

      If you did it right you would render the damaged area of each window for each display it's visible on, but that would require more rigorous engineering than our software stacks have.

      • By account42 2025-06-25 14:57

        It would also mean that moving the window now either needs to wait for repaint or becomes a hell lot more complicated and still have really weird artifacts.

    • By 6510 2025-06-24 21:46 (1 reply)

      If the initial picture is large enough the blur from down-scaling isn't so bad. Say 1.3 pixel per pixel vs 10.1 pixels per pixel.

      • By account42 2025-06-25 14:58

        That also means you need a 10x as powerful GPU though.

  • By wmf 2025-06-24 19:29 (5 replies)

    Drawing a circle is kind of cheating. The hard part of scaling is drawing UI elements like raster icons or 1px hairlines to look non-blurry.

    • By okanat 2025-06-24 20:49 (5 replies)

      And also doing it for multiple monitors with differing scales. Nobody claims X11 doesn't support different DPIs. The problems occur when you have monitors with differing pixel densities.

      At the moment only Windows handles that use case perfectly, not even macOS. Wayland comes second if the optional fractional scaling is implemented by the toolkit and the compositor. I am skeptical of the Linux desktop ecosystem to do correct thing there though. Both server-side decorations and fractional scaling being optional (i.e. requires runtime opt-in from compositor and the toolkit) are missteps for a desktop protocol. Both missing features are directly attributable to GNOME and their chokehold of GTK and other core libraries.

      • By ChocolateGod 2025-06-25 7:48

        > you have monitors with differing pixel densities. At the moment only Windows handles that use case perfectly

        I have a mixed DPI setup and Windows falls flat (on latest Win 11): the jank when you move an application from one monitor to another as it tells the application to redraw is horrible, and even then it sometimes fails and I end up with a cut-off oversized application, or the app crashes.

        Whereas on GNOME Wayland I can resize an application to cover all my monitors and it 'just works', keeping it the same physical size on all of them even when one monitor is 4K and the others 1440p. There's no jank, no redraw. Yes, there's sometimes artifacting from it downscaling as the app targets the highest DPI and gets downsized by the compositor, but that's okay to me.

      • By Avamander 2025-06-24 23:15 (1 reply)

        Where does Windows handle it? It's a hodgepodge of different frameworks that often look absolutely abysmal at any scale besides 100%.

        • By okanat 2025-06-25 0:01

          Every UI framework that runs on Windows has to communicate using Win32 API at the lowest level. Here is the guide: https://learn.microsoft.com/en-us/windows/win32/hidpi/high-d...

          Every GUI application on Windows runs an infinite event loop. In that loop you handle messages like [WM_INPUT](https://learn.microsoft.com/en-us/windows/win32/inputdev/wm-...). With Windows 8, Microsoft added a new message type: [WM_DPICHANGED](https://learn.microsoft.com/en-us/windows/win32/hidpi/wm-dpi...). To not break the existing applications with an unknown message, Windows requires the applications to opt-in. The application needs to report its DPI awareness using the function [SetProcessDpiAwareness](https://learn.microsoft.com/en-us/windows/win32/api/shellsca...). The setting of the DPI awareness state can also be done by attaching an XML manifest file to the .exe file.

          With the message Windows not only provides the exact DPI to render the Window contents for the display but also the size of the window rectangle for the perfect pixel alignment and to prevent weird behavior while switching displays. After receiving the DPI, it is up to application to draw things at that DPI however it desires. The OS has no direct access to dictate how it is drawn but it does provide lots of helper libraries and functions for font rendering and for classic Windows UI elements.
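
          A minimal sketch of what that handling looks like inside a window procedure (standard Win32 names; per-monitor DPI awareness assumed to be declared in the manifest):

              case WM_DPICHANGED: {
                  int dpi = HIWORD(wParam);  // new DPI of the window's monitor
                  RECT *rc = (RECT *)lParam; // OS-suggested new window rectangle
                  SetWindowPos(hwnd, NULL, rc->left, rc->top,
                      rc->right - rc->left, rc->bottom - rc->top,
                      SWP_NOZORDER | SWP_NOACTIVATE);
                  // then re-create DPI-dependent resources (fonts, bitmaps) at dpi
                  return 0;
              }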

          If the application is using a Microsoft-implemented .NET UX library (WinForms, WPF or UWP), Microsoft has already implemented the redrawing functions. You only need to include manifest file into the .exe resources.

          After all of this implementation, why does one get blurry apps? Because those applications don't opt in to handle WM_DPICHANGED. So, the only option that's left for Windows is to let the application draw itself at the default DPI and then stretch its image. Windows will map the input messages to the default DPI pixel positions.

          Microsoft does provide a halfway point between a fully DPI aware app and an unaware app, if the app uses the old Windows resource files to store the UI in the .exe resources. Since those apps are guaranteed to use Windows standard UI elements, Windows can intercept the drawing functions and at least draw the standard controls with the correct DPI. That's called "system aware". Since it is intercepting the application's way of drawing, it may result in weird UI bugs though.

      • By dontlaugh 2025-06-25 8:29 (1 reply)

        I've found the opposite, that only macOS handles that perfectly.

        Windows still breaks in several situations like different size and density monitors, but it's generally good enough.

        Recent Gnome on Wayland does about as well as Windows.

        • By lotharcable 2025-06-25 19:30

          Windows is the only platform that tries to do it "correctly" as per the internet peanut gallery.

          And, of course, doing it "wrongly" as per what OS X and Gnome does works a lot better in practice.

      • By axus 2025-06-24 21:44 (1 reply)

        Speaking of X11 and Windows, any recommended Windows Xservers to add to this StackOverflow post? https://stackoverflow.com/questions/61110603/how-to-set-up-w...

        I hadn't heard of WSLg, vcxsrv was the best I could do for free.

        • By okanat 2025-06-24 22:50

          With WSLg, Windows runs a native Wayland server under Windows and it will use Xwayland to display X11 apps. You should be able to use any GUI app without any extra setup. You should double check the environment variables though. Sometimes .bashrc etc. or WSL's systemd support interferes with them.

      • By akdor1154 2025-06-24 21:35 (2 replies)

        This is exactly right.

        There is no mechanism for the user to specify a per-screen text DPI in X11.

        (Or maybe there secretly is, and i should wait for the author to show us?)

        • By somat 2025-06-251:00

          X11 has had this since day one. However, the trade-offs of actually employing it are... unfortunate. It leans real hard on the application to handle crossing screen boundaries, and very few applications were willing to put the work in. So xrandr was invented, which does more of what people want with multiple screens by treating them as parts of one large virtual screen, but you lose the per-screen DPI.

          http://wok.oblomov.eu/tecnologia/mixed-dpi-x11/
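
          The core-protocol mechanism being described is still visible in Xlib. A minimal sketch (my hypothetical example, assuming a classic multi-screen "Zaphod" setup; with xrandr-merged outputs, ScreenCount() is just 1):

            #include <stdio.h>
            #include <X11/Xlib.h>

            int main(void)
            {
                Display *dpy = XOpenDisplay(NULL);
                if (!dpy)
                    return 1;
                /* Each core X screen advertises its own physical size, so
                   per-screen DPI has been queryable since day one. */
                for (int s = 0; s < ScreenCount(dpy); s++)
                    printf("screen %d: %.1f dpi\n", s,
                        DisplayWidth(dpy, s) * 25.4 / DisplayWidthMM(dpy, s));
                XCloseDisplay(dpy);
                return 0;
            }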

        • By okanat 2025-06-2422:451 reply

          Natively in X11? No. Even with XRandR, it's still no. You can obtain the display size and then draw things differently using OpenGL, but now you're reinventing the display protocol in your drawing engine (which is what GLX is, after all, but I digress). You need to onboard every toolkit to your protocol.

          • By uecker 2025-06-254:47

            Is this different to Wayland?

    • By phkahler 2025-06-2420:032 reply

      >> The hard part of scaling is drawing UI elements like raster icons or 1px hairlines to look non-blurry.

      And doing so actually using X, not OpenGL.

      • By kelnos 2025-06-253:442 reply

        Toolkits don't use X to do much (if any) drawing these days. They all use something like Cairo or Skia or -- yes -- OpenGL to render offscreen, and then upload to X for display (or, in the case of OpenGL, they can also do direct rendering).
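
        A minimal sketch of that render-offscreen-then-upload pattern (my hypothetical example, not any particular toolkit's code; real toolkits rasterize with Cairo or Skia and upload via MIT-SHM rather than a plain malloc'd buffer):

          #include <stdlib.h>
          #include <string.h>
          #include <X11/Xlib.h>

          int main(void)
          {
              const int W = 256, H = 256;
              Display *dpy = XOpenDisplay(NULL);
              if (!dpy)
                  return 1;
              int scr = DefaultScreen(dpy);
              Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr),
                  0, 0, W, H, 0, 0, BlackPixel(dpy, scr));
              XSelectInput(dpy, win, ExposureMask);
              XMapWindow(dpy, win);

              /* "Offscreen rendering": fill a client-side buffer however we
                 like (assumes a 24-bit TrueColor visual for simplicity). */
              char *pixels = malloc(W * H * 4);
              for (int i = 0; i < W * H; i++)
                  memcpy(pixels + i * 4, "\x40\x80\xc0\x00", 4);

              XImage *img = XCreateImage(dpy, DefaultVisual(dpy, scr), 24,
                  ZPixmap, 0, pixels, W, H, 32, 0);
              GC gc = XCreateGC(dpy, win, 0, NULL);

              /* "Upload to X for display": push finished pixels to the server. */
              for (XEvent ev;;) {
                  XNextEvent(dpy, &ev);
                  if (ev.type == Expose)
                      XPutImage(dpy, win, gc, img, 0, 0, 0, 0, W, H);
              }
          }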

        • By lotharcable 2025-06-2519:28

          Yes, toolkit authors have realized they have to avoid X11 as much as possible if they want good results.

          This is one of the major motivations for why the X11 guys decided Wayland was a good idea.

          Because having your display server draw your application's output instead of your application drawing the output is a bad idea.

        • By sprash 2025-06-257:05

          If you use Cairo on X11, rendering automatically happens via the XRender extension. This is a rather efficient wire protocol that supports sub-pixel coordinates, transparency, gradients, and more. No off-screen rendering required. (Some of the older GTK2 theme engines worked that way and allowed beautiful UIs with fast remote capabilities.)
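
          A minimal sketch of what that looks like (my hypothetical standalone example, error handling elided; Cairo targets the X drawable directly, and where XRender is available the paths, gradients, and alpha travel as render requests rather than client-rendered pixmaps):

            #include <math.h>
            #include <cairo/cairo-xlib.h>
            #include <X11/Xlib.h>

            int main(void)
            {
                Display *dpy = XOpenDisplay(NULL);
                if (!dpy)
                    return 1;
                int scr = DefaultScreen(dpy);
                Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr),
                    0, 0, 200, 200, 0, 0, WhitePixel(dpy, scr));
                XSelectInput(dpy, win, ExposureMask);
                XMapWindow(dpy, win);

                cairo_surface_t *surf = cairo_xlib_surface_create(dpy, win,
                    DefaultVisual(dpy, scr), 200, 200);
                cairo_t *cr = cairo_create(surf);

                for (XEvent ev;;) {
                    XNextEvent(dpy, &ev);
                    if (ev.type != Expose)
                        continue;
                    /* Sub-pixel coordinates and transparency, drawn over the wire. */
                    cairo_set_source_rgba(cr, 0.2, 0.4, 0.8, 0.7);
                    cairo_arc(cr, 100.5, 100.5, 80.25, 0, 2 * M_PI);
                    cairo_fill(cr);  /* antialiased, composited server-side */
                    cairo_surface_flush(surf);
                    XFlush(dpy);
                }
            }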

      • By kllrnohj 2025-06-250:491 reply

        Yeah, this is kinda the big elephant in the room here: they didn't prove what they set out to prove. Yes, obviously OpenGL does scaling just fine; the entire point of Wayland is that the compositor is just a compositor. They didn't do any scaling with X. They didn't do anything with X at all other than ask it for some basic display information.

        • By slackfan 2025-06-251:19

          X shouldn't be displaying anything that isn't a right angle anyway.

          All circular UI elements are haram.

    • By zozbot234 2025-06-2420:53

      That depends on what kind of filtering is used when upscaling those icons. If you use modern resampling filters, you are more likely to get a subtle "oil painting" or "watercolor"-like effect with some very minor ringing effects next to sharp transitions (the effect of correctly-applied antialiasing, with a tight limit on spatial frequencies) as opposed to any visible blur. These filters may be somewhat compute-intensive when used for upscaling the entire screen - but if you only upscale small raster icons or other raster images, and use native-resolution rendering for everything else, that effect is negligible.

    • By DonHopkins 2025-06-251:191 reply

      Ha ha, funny you should mention circles! It's just so much fun filling and stroking arcs and circles correctly with X11. From the horse's mouth:

      https://archive.org/details/xlibprogrammingm01adri/page/144/...

      Xlib Programming Manual and Xlib Reference Manual, Section 6.1.4, p. 144:

      >To be more precise, the filling and drawing versions of the rectangle routines don't draw even the same outline if given the same arguments.

      >The routine that fills a rectangle draws an outline one pixel shorter in width and height than the routine that just draws the outline, as shown in Figure 6-2. It is easy to adjust the arguments for the rectangle calls so that one draws the outline and another fills a completely different set of interior pixels. Simply add 1 to x and y and subtract 1 from width and height. In the case of arcs, however, this is a much more difficult proposition (probably impossible in a portable fashion).
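
      In code, the rectangle adjustment the manual describes looks like this (a hypothetical helper of mine; dpy, win, and gc would come from the usual Xlib setup):

        #include <X11/Xlib.h>

        /* XDrawRectangle's outline hangs one pixel below and to the right,
           touching a (w+1) x (h+1) box, while XFillRectangle fills exactly
           w x h pixels; shrinking the fill makes it sit inside the outline. */
        void outline_and_fill(Display *dpy, Drawable win, GC gc,
                              int x, int y, unsigned w, unsigned h)
        {
            XDrawRectangle(dpy, win, gc, x, y, w, h);
            XFillRectangle(dpy, win, gc, x + 1, y + 1, w - 1, h - 1);
        }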

      https://news.ycombinator.com/item?id=11484148

      DonHopkins on April 12, 2016 | parent | context | favorite | on: NeWS – Network Extensible Window System

      >There's no way X can do anti-aliasing, without a ground-up redesign. The rendering rules are very strictly defined in terms of which pixels get touched and how.

      >There is a deep-down irreconcilable philosophical and mathematical difference between X11's discrete half-open pixel-oriented rendering model, and PostScript's continuous stencil/paint Porter/Duff imaging model.

      >X11 graphics round differently when filling and stroking, define strokes in terms of square pixels instead of fills with arbitrary coordinate transformations, and is all about "half open" pixels with gravity to the right and down, not the pixel coverage of geometric region, which is how anti-aliasing is defined.

      >X11 is rasterops on wheels. It turned out that not many application developers enjoyed thinking about pixels and coordinates the X11 way, displays don't always have square pixels, the hardware (cough Microvax framebuffer) that supports rasterops efficiently is long obsolete, rendering was precisely defined in a way that didn't allow any wiggle room for hardware optimizations, and developers would rather use higher level stencil/paint and scalable graphics, now that computers are fast enough to support it.

      >I tried describing the problem in the Unix-Haters X-Windows Disaster chapter [1]:

      >A task as simple as filling and stroking shapes is quite complicated because of X's bizarre pixel-oriented imaging rules. When you fill a 10x10 square with XFillRectangle, it fills the 100 pixels you expect. But you get extra "bonus pixels" when you pass the same arguments to XDrawRectangle, because it actually draws an 11x11 square, hanging out one pixel below and to the right!!! If you find this hard to believe, look it up in the X manual yourself: Volume 1, Section 6.1.4. The manual patronizingly explains how easy it is to add 1 to the x and y position of the filled rectangle, while subtracting 1 from the width and height to compensate, so it fits neatly inside the outline. Then it points out that "in the case of arcs, however, this is a much more difficult proposition (probably impossible in a portable fashion)." This means that portably filling and stroking an arbitrarily scaled arc without overlapping or leaving gaps is an intractable problem when using the X Window System. Think about that. You can't even draw a proper rectangle with a thick outline, since the line width is specified in unscaled pixel units, so if your display has rectangular pixels, the vertical and horizontal lines will have different thicknesses even though you scaled the rectangle corner coordinates to compensate for the aspect ratio.

      [1] The X-Windows Disaster: http://www.art.net/~hopkins/Don/unix-haters/x-windows/disast...

      • By wmf 2025-06-253:501 reply

        I think that stuff was all fixed long ago by Cairo/Skia on XRender.

        • By DonHopkins 2025-06-259:13

          Yes, I agree: Cairo is "really good stuff" as Jim Gettys so modestly puts it! It's one of the best things to come out of X-Windows and the original Xr extension. ("The name Cairo derives from the original name Xr, interpreted as the Greek letters chi and rho.")

          Finally (and for a long time now) it's an independent library, no longer tied into the X server and the Xr extension; there are a lot of wrappers for it, browsers and GTK and many other frameworks use it, and it has lots of nice bindings to languages, like pycairo.

          Jim Gettys, one of Cairo's authors and an original X-Windows architect, also worked on the OLPC project and its Sugar user interface framework (designed for making educational apps for kids), which used Cairo via GTK/PyGTK/PyCairo/Pango/Poppler.

          Jim's big cause is that he champions eradicating "Bufferbloat":

          https://en.wikipedia.org/wiki/Bufferbloat

          https://gettys.wordpress.com/2010/12/03/introducing-the-crim...

          I had a great time using it for the Micropolis (open source SimCity) tile rendering engine, which I wrote in C++, then wrapped with David Beazley's SWIG tool as a Python extension, so Python PyGTK apps could pass their existing Cairo rendering context into C++ and it could render at high speed without the Python interpreter in the way, on either windows or bitmaps.

          https://en.wikipedia.org/wiki/SWIG

          The TileEngine is a C++ Python module wrapped with SWIG that uses the Cairo library and knows how to accept a PyGTK Cairo context as a parameter to draw on directly via the API -- Python just passes pointers back and forth to PyGTK, wrangling and unwrangling the wrappers around the Cairo context pointer:

          TileEngine: https://github.com/SimHacker/micropolis/tree/master/Micropol...

          tileengine.h: https://github.com/SimHacker/micropolis/blob/master/Micropol...

          tileengine.cpp: https://github.com/SimHacker/micropolis/blob/master/Micropol...

          pycairo.i: https://github.com/SimHacker/micropolis/blob/master/Micropol...

          tileengine-swig-python.i: https://github.com/SimHacker/micropolis/blob/master/Micropol...

          tileengine.i: https://github.com/SimHacker/micropolis/blob/master/Micropol...

          Then you can call the tile engine from Python, and build GTK widgets and apps on top of it like so, and it all runs silky smooth, with pixel perfect tiling and scaling, so you can zoom into the SimCity map, and Python can efficiently draw sprites and overlays on it like Godzilla, tornados, trains, airplanes, helicopters, the cursor, etc:

          tiledrawingarea.py: https://github.com/SimHacker/micropolis/blob/master/Micropol...

          tilewindow.py: https://github.com/SimHacker/micropolis/blob/master/Micropol...

          tiletool.py: https://github.com/SimHacker/micropolis/blob/master/Micropol...

          I've written about Cairo on HN before, sharing some email with Jim about it:

          https://news.ycombinator.com/item?id=20379336

          DonHopkins on July 8, 2019 | parent | context | favorite | on: The death watch for the X Window System has probab...

          Cairo wasn't the library behind the X11 drawing API; it was originally the Xr rendering extension, an alternative to the original X11 drawing API.

          https://en.wikipedia.org/wiki/Cairo_(graphics)

          >The name Cairo derives from the original name Xr, interpreted as the Greek letters chi and rho.

          You're right, it doesn't actually make sense to put your drawing functions in the display server any more (at least in the case of X11, which doesn't have an extension language to drive the drawing functions -- but it did make sense for NeWS which also used PostScript as an extension language as well as a drawing API).

          So Cairo rose above X11 and became its own independent library, so it could be useful to clients and toolkits on any window system or hardware.

          https://www.osnews.com/story/3602/xr-x11-cross-device-render...

          https://web.archive.org/web/20030805030147/http://xr.xwin.or...

          https://keithp.com/~keithp/talks/xarch_ols2004/xarch-ols2004...

          Here's some email discussion with Jim Gettys about where Cairo came from:

          From: Jim Gettys <jg@laptop.org> Date: Jan 9, 2007, 11:04 PM

          The day I thought X was dead was the day I installed CDE on my Alpha.

          It was years later I realized the young turks were ignoring the disaster perpetrated by the UNIX vendors in the name of "standardization"; since then, Keith Packard and I have tried to pay for our design mistakes in X by things like the new font model, X Render extension, Composite, and Cairo, while putting stakes in the heart of disasters like XIE, LBX, PEX, the old X core font model, and similar design by committee mistakes (though the broken core 2D graphics and font stuff must be considered "original sin" committed by people who didn't know any better at the time).

          So we've mostly succeeded at dragging the old whale off the beach and getting it to live again.

          From: Don Hopkins <dhopkins@donhopkins.com> Date: Wed, Jan 17, 2007, 10:50 PM

          Cairo looks wonderful! I'm looking forward to using it from Python, which should be lots of fun.

          A lot of that old X11 stuff was thrown in by big companies to shill existing products (like using PEX to sell 3d graphics hardware, by drawing rotating 3-d cubes in an attempt to hypnotize people).

          Remember UIL? I heard that was written by the VMS trolls at DEC, who naturally designed it with a 132 column line length limitation and no pre-processor of course. The word on the street was that DEC threw down the gauntlet and insisted on UIL being included in the standard, even though the rest of the committee hated it for sucking so bad. But DEC threatened to hold their breath until they got their way.

          And there were a lot of weird dynamics around commercial extensions like Display PostScript, which (as I remember it) was used as an excuse for not fixing the font problems a lot earlier: "If you want to do readable text, then you should be using Display PostScript."

          The problem was that Linux doesn't have a vendor to pay the Display PostScript licensing fee to Adobe, so Linux drove a lot of "urban renewal" of problems that had been sidelined by the big blundering companies originally involved with X.

          >So we've mostly succeeded at dragging the old whale off the beach and getting it to live again.

          Hey, that's a lot better than dynamiting the whale, which seemed like a such good idea at the time! (Oh the humanity!)

          https://www.youtube.com/watch?v=AtVSzU20ZGk

          From: Jim Gettys <jg@laptop.org> Date: Jan 17, 2007, 11:41 PM

          > Cairo looks wonderful! I'm looking forward to using it from Python, which should be lots of fun.

          Yup. Cairo is really good stuff. This time we had the benefit of Lyle Ramshaw to get us unstuck. Would that I'd known Lyle in 1986; but it was too late 3 years later when I got to know him.

          https://cairographics.org/bibliography/

          Here's some more of the discussion with Jim about Cairo and X-Windows:

          https://news.ycombinator.com/item?id=7727953

          >In 2007, I apologized to Jim Gettys for the tone of the X-Windows Disaster chapter I wrote for the book, to make sure he had no hard feelings and forgave me for my vitriolic rants and cheap shots of criticism:

          http://www.donhopkins.com/home/catalog/unix-haters/x-windows...

          DH>> I hope you founds it more entertaining than offensive!

          JG> At the time, I remember it hurting; now I find it entertaining. Time cures such things. And Motif was definitely a vendor perpetrated unmitigated disaster: the worst of it was that it "succeeded" in unifying the UNIX gui, which means it succeeded at stopping all reasonable work on gui's on UNIX until the young Linux turks took over.

          JG> And by '93 or so, the UNIX vendors actively wanted no change, as they had given up on the desktop and any innovation would cost them money.

          DH>> The whole "Unix-Haters Handbook" thing was intended to shake up the status quo and inspire people to improve the situation instead of blindly accepting the received view. (And that's what's finally happened, although I can't take the credit, because it largely belongs to Linux -- and now that's the OLPC's mission!)

          DH>> The unix-haters mailing list was a spin-off of its-lovers@mit-ai: in order to qualify for the mailing list you had to post a truly vitriolic no-holds-barred eyeball-popping flame.

          DH>> I hope that helps to explain the tone of "The X-Windows Disaster", which I wrote to blow off steam while I was developing the X11 version of SimCity.

          JG> Yup. I won't hold it against you ;-). Though any operating system with ddt as its shell is downright user hostile...

          JG>>> The day I thought X was dead was the day I installed CDE on my Alpha. [...]

          And more about Pango, the text rendering library on top of Cairo, the OLPC's Sugar user interface, which was built on PyGTK, and the OLPC Read book reader app that used the Cairo-based Poppler PDF rendering library:

          https://en.wikipedia.org/wiki/Poppler_(software)

          https://news.ycombinator.com/item?id=16852148

          >I worked on making the Read activity usable in book mode (keyboard folded away, but gamepad buttons usable), and I vaguely recall putting in an ioctl to put the CPU to sleep after you turned a page, but I'm not sure if my changes made it in. [...]

          >Sugar had a long way to go, and wasn't very well documented. They were trying to do too much from scratch, and choose a technically good but not winning platform. It was trying to be far too revolutionary, but at the same time building on top of layers and layers of legacy stack (X11, GTK, GTK Objects, PyGTK bindings, Python, etc).

          >Sugar was written in Python and built on top of PyGTK, which necessitated buying into a lot of "stuff". On top of that, it used other Python modules and GTK bindings like Cairo for imaging, Pango for text, etc. All great industrial strength stuff. But then it had its own higher level Hippo canvas and user interface stuff on top of that, which never really went anywhere (for good reason: it was complex because it was written for PyGTK in a misshapen mish-mash of Python and C with the GTK object system, instead of pure simple Python code -- hardly what Alan Kay thinks of as "object oriented programming"). And for browser based stuff there were the Python bindings to xulrunner, which just made you yearn for pure JavaScript without all the layers of adaptive middle-ware between incompatible object systems.

          >The problem is that Sugar missed the JavaScript/Web Browser boat (by arriving a bit too early, or actually just not having enough situational awareness). Sugar should have been written in JavaScript and run in any browser (or in an Electron-like shell such as xulrunner). Then it would be like a Chromebook, and it would benefit from the enormous amount of energy being put into the JavaScript/HTML platform. Python and GTK just hasn't had that much lovin'.

          >When I ported the multi player TCL/Tk/X11 version of SimCity to the OLPC, I ripped out the multi player support because it was too low level and required granting full permission to your X server to other players. I intended to eventually reimplement it on top of the Sugar grid networking and multi user activity stuff, but that never materialized, and it would have been a completely different architecture than one X11 client connecting to multiple X11 servers.

          >Then I made a simple shell script based wrapper around the TCL/Tk application, to start and stop it from the Sugar menus. It wasn't any more integrated with Sugar than that. Of course the long term plan was to rewrite it from the ground up so it was scriptable in Python, and took advantage of all the fancy Sugar stuff.

          >But since the Sugar stuff wasn't ready yet, I spent my time ripping out TCL/Tk, translating the C code to C++, wrapping it with SWIG and plugging it into Python, then implementing a pure PyGTK/Cairo user interface, without any Sugar stuff, which would at least be a small step in the direction of supporting Sugar, and big step in the direction of supporting any other platform (like the web).

          [...]

    • By dark-star 2025-06-2420:051 reply

      Yeah, exactly. Nobody claimed that it is impossible to determine the physical geometry of your display (though that might be tricky for remote X sessions; I don't know if it would work there too?)

      • By kvemkon 2025-06-2420:11

        > tricky for remote X sessions, I don't know if it would work there too

        The author did exactly this:

        > Even better, I didn’t mention that I wasn’t actually running this program on my laptop. It was running on my router in another room, but everything worked as if

  • By kunzhi 2025-06-2420:211 reply

    Interesting article. I'll admit that when I first saw the title, I was thinking of a different kind of "scaling": namely, the client/server decoupling in X11.

    I still think X11 forwarding over SSH is a super cool and unsung/undersung feature. I know there are plenty of good reasons we don't really "do it these days" but I have had some good experiences where running the UI of a server app locally was useful. (Okay, it was more fun than useful, but it was useful.)

    • By xioxox 2025-06-2420:462 reply

      It's certainly very useful. I do half my work using X11 over ssh and it works reasonably well over a LAN (at least using emacs, plotting, etc).

      • By inetknght 2025-06-250:382 reply

        "reasonably well" as in... yeah it works. But it's extremely laggy (for comparison, I know people who forwarded DirectX calls over 10Mbit ethernet and could get ~15 frames/sec playing Unreal Tournament in the early 00's), and any network blip is liable to cause a window that you can neither interact with nor forcefully close.

        It felt like a prototype feature that never became production-ready, for that reason alone. Then there are all the security concerns, which solidify that.

        But yes, it does work reasonably well, and it is actually really cool. I just wish it were... better.

        • By uecker 2025-06-254:53

          It is laggy, but not because of protocol limitations; it's because Xlib can't hide the latency, and we never got proper support from the toolkits to do this via XCB. Xpra and other proxies work around it, but it would be nice if toolkits supported it directly. Reconnecting, or moving windows between displays, would also be no problem if toolkits supported it.
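
          A minimal sketch of the latency hiding XCB makes possible (my hypothetical example, not from the comment: requests are fired off asynchronously as cookies and the replies collected later, so the round trips overlap instead of serializing the way Xlib's blocking calls do):

            #include <stdio.h>
            #include <stdlib.h>
            #include <string.h>
            #include <xcb/xcb.h>

            int main(void)
            {
                xcb_connection_t *c = xcb_connect(NULL, NULL);
                if (xcb_connection_has_error(c))
                    return 1;

                const char *names[] =
                    { "WM_PROTOCOLS", "WM_DELETE_WINDOW", "_NET_WM_NAME" };
                xcb_intern_atom_cookie_t cookies[3];

                /* Fire off all requests without waiting for any replies. */
                for (int i = 0; i < 3; i++)
                    cookies[i] = xcb_intern_atom(c, 0, strlen(names[i]), names[i]);

                /* Collect the replies afterwards; the waits overlap. */
                for (int i = 0; i < 3; i++) {
                    xcb_intern_atom_reply_t *r =
                        xcb_intern_atom_reply(c, cookies[i], NULL);
                    if (r) {
                        printf("%s = %u\n", names[i], r->atom);
                        free(r);
                    }
                }
                xcb_disconnect(c);
                return 0;
            }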

        • By welterde 2025-06-2512:31

          For applications that were written with X11 in mind, it works much, much better than that. One example was controlling a telescope. The computers in the control room were pretty much thin clients and displayed various windows from various machines across the mountain -- even across multiple different operating systems! Some machines were running Solaris and some Linux. The different machines belonged to different aspects of the telescope: some controlled the telescope itself, and some belonged to the different scientific instruments on it. And it all worked quite well, with no real noticeable lag.

      • By DonHopkins 2025-06-251:37

        I worked on the NeWS drivers for Emacs (both "Evil Software Hoarder" Gosling UniPress Emacs 2.20 and later "Free" Gnu Emacs 18), which were extremely efficient and smoothly interactive over low-baud-rate modems (which we called "thin wire", as opposed to, e.g., the "thick wire" coaxial 10BASE5 Ethernet of the time). Instead of using the extraordinarily inefficient, chatty, ping-pongy X-Windows protocol, Emacs could simply download PostScript code to the window server that defined a highly optimized application-specific client/server protocol and intelligent front-end (now termed "AJAX"), which performed as much real-time interaction in the window system as possible without any network activity, like popping up and tracking pie menus, and providing real-time feedback and autoscroll when selecting and highlighting text.

        For example, both versions of Emacs would download the lengths of each line on the screen when you started a selection, so you could drag and select the text and animate the selection overlay without any network traffic at all, without sending mouse move events over the network, only sending messages when you autoscrolled or released the button.

        http://www.bitsavers.org/pdf/sun/NeWS/800-5543-10_The_NeWS_T... document page 2, pdf page 36:

        >Thin wire

        >TNT programs perform well over low bandwidth client-server connections such as telephone lines or overloaded networks because the OPEN LOOK components live in the window server and interact with the user without involving the client program at all.

        >Application programmers can take advantage of the programmable server in this way as well. For example, you can download user-interaction code that animates some operation.

        UniPress Emacs NeWS Driver:

        https://github.com/SimHacker/NeMACS/blob/b5e34228045d544fcb7...

        Selection support with local feedback:

        https://github.com/SimHacker/NeMACS/blob/b5e34228045d544fcb7...

        Gnu Emacs 18 NeWS Driver (search for LocalSelectionStart):

        https://donhopkins.com/home/code/emacs18/src/tnt.ps

        https://news.ycombinator.com/item?id=26113192

        DonHopkins on Feb 12, 2021 | parent | context | favorite | on: Interview with Bill Joy (1984)

        >Bill was probably referring to what RMS calls "Evil Software Hoarder Emacs" aka "UniPress Emacs", which was the commercially supported version of James Gosling's Unix Emacs (aka Gosling Emacs / Gosmacs / UniPress Emacs / Unimacs) sold by UniPress Software, and it actually cost a thousand or so for a source license (but I don't remember how much a binary license was). Sun had the source installed on their file servers while Gosling was working there, which was probably how Bill Joy had access to it, although it was likely just a free courtesy license, so Gosling didn't have to pay to license his own code back from UniPress to use at Sun. https://en.wikipedia.org/wiki/Gosling_Emacs

        >I worked at UniPress on the Emacs display driver for the NeWS window system (the PostScript based window system that James Gosling also wrote), with Mike "Emacs Hacker Boss" Gallaher, who was charge of Emacs development at UniPress. One day during the 80's Mike and I were wandering around an East coast science fiction convention, and ran into RMS, who's a regular fixture at such events.

        >Mike said: "Hello, Richard. I heard a rumor that your house burned down. That's terrible! Is it true?"

        >RMS replied right back: "Yes, it did. But where you work, you probably heard about it in advance."

        >Everybody laughed. It was a joke! Nobody's feelings were hurt. He's a funny guy, quick on his feet!

        In the late 80's, if you had a fast LAN and not a lot of memory and disk (like a 4 meg "dickless" Sun 3/50), it actually was more efficient to run X11 Emacs and even the X11 window manager itself over the LAN on another workstation than on your own, because then you didn't suffer from frequent context switches and paging every keystroke and mouse movement and click.

        The X11 server and Emacs and WM didn't need to context switch to simply send messages over the network and paint the screen if you ran emacs and the WM remotely, so Emacs and the WM weren't constantly fighting with the X11 server for memory and CPU. Context switches were really expensive on a 68k workstation, and the way X11 is designed, especially with its outboard window manager, context switching from ping-ponging messages back and forth and back and forth and back and forth and back and forth between X11 and the WM and X11 and Emacs every keystroke or mouse movement or click or window event KILLED performance and caused huge amounts of virtual memory thrashing and costly context switching.

        Of course NeWS eliminated all that nonsense gatling gun network ping-ponging and context switching, which was the whole point of its design.

        That's the same reason using client-side Google Maps via AJAX of 20 years ago was so much better than the server-side Xerox PARC Map Viewer via http of 32 years ago.

        https://en.wikipedia.org/wiki/Xerox_PARC_Map_Viewer

        Outboard X11 ICCCM window managers are the worst possible most inefficient way you could ever possibly design a window manager, and that's not even touching on their extreme complexity and interoperability problems. It's the one program you NEED to be running in the same context as the window system to synchronously and seamlessly handle events without dropping them on the floor and deadlocking (google "X11 server grab" if you don't get what this means), but instead X11 brutally slices the server and window manager apart like King Solomon following through with his child-sharing strategy.

        https://tronche.com/gui/x/xlib/window-and-session-manager/XG...

        NeWS, meanwhile, not only runs the window manager efficiently in the server without any context switching or network overhead, but also lets you easily plug in your own customized window frames (with tabs and pie menus), and implement fancy features like rooms, virtual scrolling desktops, and all kinds of cool stuff! At Sun we were even managing X11 windows with a NeWS ICCCM window manager written in PostScript, wrapping tabbed windows with pie menus around your X-Windows!

        https://donhopkins.com/home/archive/NeWS/owm.ps.txt

        https://donhopkins.com/home/archive/NeWS/win/xwm.ps

        https://www.donhopkins.com/home/catalog/unix-haters/x-window...
