XSLT – Native, zero-config build system for the Web

XSLT (1999): native, zero-config build system for the Web.

   From: Grug brain Paco
     To: Paco
Subject: XSLT

most static websites created like this

  • data (.json, .md, .txt)
  • build system (Hugo, Next.js, Astro, …)
  • output (static HTML)

me make many website, find build system has much complexity. not understand big project like React Next.js, need many PhD for understand how my markdown blog work

me want remove framework (many grug do), want use simple HTML and CSS, use sacred spec like HTTP URI HTML. but no build system? this mean writing lot lot HTML by hand. when time come for many webpages, need header and footer same same on all pages: copy paste easy for while. but when, in future, there many many many pages? me need better solution

can use HTML import? nope not exist

can use web component? nope need JS and now need JavaScript engine

for a while me think about this problem. many month me work on other projects, work on tool like web browser, UI components, think about other thing in mind, like how gravity and what make good stylesheet.

wait, can use web browser like build system? seem good for data → HTML, it understand many format already like text/html text/markdown application/xml. more months me learn about web browser, what make good URL, how email work, take some break, listen music, play game

learn that spec best place learn all web stuff. discover many good idea in spec, read carefully, click links, use many feature me not know about for make good app experience

one day me working on RSS feed and want make /rss/blog.xml more pretty, not just text, can use stylesheet? yep find spec about XSLT for making pretty XML document

me not really know much about XML document, just using rss npm package and it work, so me find XML spec and read bit…

<?xml version="1.0"?>
<blog>
  <post id="42" publishedAt="2025-06-26">
    <title>Hello XSLT</title>
    <tags>…</tags>
  </post>
</blog>

ok is great! it look like HTML but for all data, not just web data. adding stylesheet very natural just new tag

<?xml version="1.0"?>
+ <?xml-stylesheet type="text/xsl" href="blog.xsl"?>
<blog>
  <post id="42" publishedAt="2025-06-26">
    <title>Hello XSLT</title>
    <tags>…</tags>
  </post>
</blog>

<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="html" indent="yes" />
  <xsl:template match="/">
    <html>
      <head> … </head>
      <body> … </body>
    </html>
  </xsl:template>
</xsl:stylesheet>

HTML output!! XSLT = (XML) => HTML. it has many feature need for build system like loop, variable, import, me not even read all yet, much excite. dynamic data come from parent XML document

<head>
  <xsl:value-of select="title" /> | Blog
</head>
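
loop also just tag. rough sketch for loop over post from blog.xml above (not author exact stylesheet, just grug example):

<xsl:template match="/blog">
  <html>
    <body>
      <xsl:for-each select="post">
        <article>
          <h2><xsl:value-of select="title" /></h2>
          <p>Published: <xsl:value-of select="@publishedAt" /></p>
        </article>
      </xsl:for-each>
    </body>
  </html>
</xsl:template>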

how I can run it? open XML file

finally, web browser become my build system. it not big problem for me store blog data as XML not JSON, it look like HTML, easy parse, flexible, native on web. all web browser support XSLT transform on page visit and display output HTML. it like "client-side" build system, run on user computer, easy to distribute static file, work without JavaScript!!

it not perfect. it not replacement for all thing. but another tool in grug brain web dev toolbelt.

thank you old ideas

thank you specs

thank you web browser, nexus of all


Read the original article

Comments

  • By badmintonbaseba 2025-06-278:379 reply

    I have worked for a company that (probably still is) heavily invested in XSLT for XML templating. It's not good, and they would probably migrate from it if they could.

      1. Even though there are newer XSLT standards, XSLT 1.0 is still dominant. It is quite limited and weird compared to the newer standards.
    
      2. Resolving performance problems of XSLT templates is hell. XSLT is a Turing-complete functional-style language, with performance very much abstracted away. There are XSLT templates that worked fine for most documents, but then one document came in with a ~100 row table and it blew up. Turns out that the template that processed the table is O(N^2) or worse, without any obvious way to optimize it (it might even have an XPath on each row that itself is O(N) or worse). I don't exactly know how it manifested, but as I recall the document was processed by XSLT for more than 7 minutes.
    
    JS might have other problems, but not being able to resolve algorithmic complexity issues is not one of them.
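
    (To make the parent's point concrete, here is a hypothetical XSLT 1.0 fragment in the spirit of what is described above; the element names are illustrative, not from the actual system. Numbering rows by counting earlier siblings does O(N) work per row, so the whole table is O(N^2):)

        <xsl:template match="row">
          <tr>
            <!-- count() walks every earlier row: O(N) here, O(N^2) for the table -->
            <td><xsl:value-of select="count(preceding-sibling::row) + 1" /></td>
            <td><xsl:apply-templates /></td>
          </tr>
        </xsl:template>

    (In this toy case position() or xsl:number would avoid the rescan; the painful cases are lookups and groupings with no obvious substitute, which is where the keys mentioned in later comments come in.)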

    • By larodi 2025-06-2716:54

      XSLT is not easy. It's Prolog on shrooms, so to speak, and it has a steep learning curve. Once mastered it gives sudoku-level satisfaction, but it can hardly ever be a standard approach to building or templating, as people normally need much less to achieve their goals.

      Besides XML is not universally loved.

    • By nithril 2025-06-2711:153 reply

      XSLT/XPath have evolved since XSLT 1.0.

      Features like keys (indexes) are now available to greatly speed up processing. A good XSLT implementation like Saxon definitely helps on the perf side as well.

      When it comes to transforming XML into something else, XSLT is quite handy for structuring the logic.

      • By sam_lowry_ 2025-06-2712:112 reply

        Keys were a thing in XSLT 1.x already.

        XSLT 2+ was more about side effects.

        I never really grokked later XSLT and XPath standards though.

        XSLT 1.0 had a steep learning curve, but it was elegant in a way poetry is elegant because of extra restrictions imposed on it compared to prose. You really had to stretch your mind to do useful stuff with it. Anyone remembers Muenchian grouping? It was gorgeous.
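
        (For readers who don't remember it, Muenchian grouping uses xsl:key plus a first-in-group test; a minimal sketch from memory, with illustrative element names:)

            <xsl:key name="posts-by-tag" match="post" use="tag" />

            <xsl:template match="/blog">
              <!-- pick one representative post per distinct tag value -->
              <xsl:for-each select="post[count(. | key('posts-by-tag', tag)[1]) = 1]">
                <h2><xsl:value-of select="tag" /></h2>
                <ul>
                  <!-- then list every post sharing that tag -->
                  <xsl:for-each select="key('posts-by-tag', tag)">
                    <li><xsl:value-of select="title" /></li>
                  </xsl:for-each>
                </ul>
              </xsl:for-each>
            </xsl:template>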

        Newer standards lost elegance and kept the ugly syntax.

        No wonder they lost mindshare.

        • By jerf 2025-06-2713:261 reply

          "Newer standards lost elegance and kept the ugly syntax."

          My biggest problem with XSLT is that I've never encountered a problem that I wouldn't rather solve with an XPath library and literally any other general purpose programming language.

          When XSLT was the only thing with XPath you could rely on, maybe it had an edge, but once everyone has an XPath library what's left is a very quirky and restrictive language that I really don't like. And I speak Haskell, so the critic reaching for the reply button can take a pass on the "Oh you must not like functional programming" routine... no, Haskell is included in that set of "literally any other general purpose programming language" above.

          • By yoz 2025-06-2716:592 reply

            Serious question: would it be worth the effort to treat XSLT as a compilation target for a friendlier language, either extant or new?

            There's clearly value in XSLT's near-universal support as a web-native system. It provides templating out of the box without invoking JavaScript, and there's demand for that[1]. But it still lacks decent in-browser debugging which JS has in spades.

            [1] https://justinfagnani.com/2025/06/26/the-time-is-right-for-a...

            • By bokchoi 2025-06-2718:49

              I just posted this in another comment: https://github.com/Juniper/libslax/wiki/Intro

            • By jerf 2025-06-2717:36

              It would at least be an interesting project. If someone put the elbow grease into it, it is distinctly possible that an XSLT stylesheet could be not just converted to JS (which is obviously true and just a matter of effort), but converted to something that is at least on the edge of human-usable and editable, and some light refactoring away from being decent code.

        • By bokchoi 2025-06-2718:48

          I haven't tried it yet, but I came across this alternate syntax for XSLT which is much more friendly:

          https://github.com/Juniper/libslax/wiki/Intro

          It looks like it was developed by Juniper and has shipped in their routers?

      • By echelon 2025-06-2713:404 reply

        XSLT just needs a different, non-XML serialization.

        XML (the data structure) needs a non-XML serialization.

        Similar to how Semantic Web's Owl has four different serializations, only one of them being the XML serialization. (eg. Owl can be represented in Functional, Turtle, Manchester, Json, and N-triples syntaxes.)

        • By bokchoi 2025-06-2718:50

          I just posted this in another comment: https://github.com/Juniper/libslax/wiki/Intro

        • By marcosdumay 2025-06-2718:19

          > XML (the data structure) needs a non-XML serialization.

          KDL is a very interesting attempt, but my impression is that people are already trying to shove way too much unnecessary complexity into it.

          IMO, KDL's document transformation is not a really good example of a better XSLT, though. I mean, it's better, but it probably can still be improved a lot.

        • By jimbokun 2025-06-2718:02

          You're looking for S-expressions.

        • By alganet 2025-06-2714:17

          > XML (the data structure) needs a non-XML serialization.

          That's YAML, and it is arguably worse. Here's a sample YAML 1.2 document straight from their spec:

              %TAG !e! tag:example.com,2000:app/
              ---
              - !local foo
              - !!str bar
              - !e!tag%21 baz
          
          Nightmare fuel. Just by looking at it, can you tell what it does?

          --

          Some notes:

          - SemWeb also has JSON-LD serialization. It's a good compromise that fits modern tooling nicely.

          - XML is still a damn good compromise between human readable and machine readable. Not perfect, but what is perfect anyway?

          - HTML5 is now more complex than XHTML ever was (all sorts of historical caveats in this claim, I know, don't worry).

          - Markup beauty is relative, we should accept that.

      • By thechao 2025-06-2713:192 reply

        Can you name a non-Saxon XSLT processor? I'd really like one. Preferably, open-source.

    • By agumonkey 2025-06-279:581 reply

      It's odd because XSLT was clearly made in an era when processing long source XML was the norm, and nested loops would obviously blow up..

      • By j16sdiz 2025-06-2710:391 reply

        It was in the era when everything walked the DOM tree, not streams.

        Streaming was not supported until later versions.

        • By agumonkey 2025-06-2711:042 reply

          Hmm my memory is fuzzy but I remember seeing backend processing of xml files a lot around 2005.

          • By count 2025-06-2713:291 reply

            Yeah, I was using Novell DirXML to do XSLT processing of inbound/outbound data in 2000 (https://support.novell.com/techcenter/articles/ana20000701.h...) for directory services stuff. It was full XML body (albeit small document sizes, as they were usually user or identity style manifests from HR systems), no streaming as we know it today.

            • By agumonkey 2025-06-2714:08

              Ok, I never heard of the pre and post xml streaming era.. I got taught.

          • By reactordev 2025-06-2711:12

            But they worked on the xml body as a whole, in memory, which is where all the headaches started. Then we introduced WSDLs on top, and then we figured out streaming.

    • By bambax 2025-06-2712:102 reply

      > XSLT 1.0 is still dominant

      How, where? In 2013 I was still working a lot with XSLT and 1.0 was completely dead everywhere one looked. Saxon was free for XSLT 2 and was excellent.

      I used to do transformation of both huge documents, and large number of small documents, with zero performance problems.

      • By pmarreck 2025-06-2713:26

        Probably corps. I was working at Factset in the early 2000's when there was a big push for it and I imagine the same thing was reflected across every Microsoft shop across corporate America at the time, which (at the time) Microsoft was winning big marketshare in. (I bet there are still a ton of internal web apps that only work with IE... sigh)

        Obviously, that means there's a lot of legacy processes likely still using it.

        The easiest way to improve the situation seems to be to upgrade to a newer version of XSLT.

      • By PantaloonFlames 2025-06-2713:05

        I recently had the occasion to work with a client that was heavily invested in XML processing for a set of integrations. They're migrating / modernizing, but they're so heavily invested in XSL that they don't want to migrate away from it. So I conducted some perf tests, and the performance I found for XSLT in .NET ("Core") was slightly to significantly better than the performance of current Java and Saxon. But they were both fast.

        In the early days the xsl was all interpreted. And was slow. From ~2004 or so, all the xslt engines came to be jit compiled. XSL benchmarks used to be a thing, but rapidly declined in value from then onward because the perf differences just stopped mattering.

    • By bux93 2025-06-279:332 reply

      Are you using the commercial version of Saxon? It's not expensive, and IMHO worth it for the features it supports (including the newer standards) and the performance. If I remember correctly (it was a long time ago) it does some clever optimizations.

      • By badmintonbaseba 2025-06-279:431 reply

        We didn't use Saxon, I don't work there anymore. We also supported client-side (browser) XSLT processing, as well as server-side. It might have helped on the server side, maybe could even resolve some algorithmic complexities with some memoization (possibly trading off memory consumption).

        But in the end the core problem is XSLT, the language. Despite being a complete programming language, your options are very limited for resolving performance issues when working within the language.

        • By halffullbrain 2025-06-279:52

          O(n^2) issues can typically be solved using keyed lookups, but I agree that the base processing speed is slow and the language really is too obscure to provide good DX.
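
          (A sketch of the keyed-lookup fix, with illustrative names, assuming rows that reference a lookup table by id:)

              <!-- naive: rescans the whole item list for every row, O(N) per lookup -->
              <xsl:value-of select="//item[@id = current()/@ref]/name" />

              <!-- keyed: declare the index once at the stylesheet top level, then lookups are cheap -->
              <xsl:key name="item-by-id" match="item" use="@id" />
              <xsl:value-of select="key('item-by-id', @ref)/name" />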

          I worked with a guy who knew all about complexity analysis, but was quick to assert that "n is always small". That didn't hold - but he'd left the team by the time this became apparent.

      • By rjsw 2025-06-2710:00

        The final free version of Saxon is a lot faster than earlier ones too. My guess is that it compiles the XSLT in some way for the JVM to use.

    • By ChrisMarshallNY 2025-06-2715:031 reply

      > Even though there are newer XSLT standards, XSLT 1.0 is still dominant.

      I'm pretty sure that's because implementing XSLT 2.0 needs a proprietary library (Saxon XSLT[0]). It was certainly the case in the oughts, when I was working with XSLT (I still wake up screaming).

      XSLT 1.0 was pretty much worthless. I found that I needed XSLT 2.0, to get what I wanted. I think they are up to XSLT 3.0.

      [0] https://en.wikipedia.org/wiki/Saxon_XSLT

      • By dragonwriter 2025-06-2715:241 reply

        Are you saying it is specified such that you literally cannot implement it other than on top of, or by mimicking bug-for-bug, that library (the way it was impossible to implement WebSQL without a particular version of SQLite), or is Saxon XSLT just the only existing implementation of the spec?

        • By ChrisMarshallNY 2025-06-2716:00

          Support required libxml/libxslt, which tops out at 1.0. I guess you could implement your own, as it’s an open standard, but I don’t think anyone ever bothered to.

          I think the guy behind Saxon may be one of the XSLT authors.

    • By mark_and_sweep 2025-06-279:361 reply

      From my experience, most simple websites are fine with XSLT 1.0 and don't experience any performance problems.

      • By badmintonbaseba 2025-06-279:45

        Sure, performance might never become a problem, it is relatively rare. But when it does there is very little you can do about it.

    • By woodpanel 2025-06-279:13

      Same here.

      I've seen a couple of blue-chip websites that could be completely taken down just by requesting the sitemap (more than once per minute).

      PS: That being said, it is an implementation issue. But it may speak for itself that 100% of the XSLT projects I've seen had it.

    • By nolok 2025-06-279:406 reply

      It's generally speaking part of the problem with the entire "XML as a savior" mindset of that earlier era and a big reason of why we left them; doesn't matter if XSLT or SOAP or even XHTML in a way ... Those were defined as machine language meant for machine talking to machine, and invariably something goes south and it's not really made for us to intervene in the middle; it can be done, but it's way more work than it should be, especially since they clearly never based it on the idea that those machines will sometimes speak "wrong", or a different "dialect".

      It looks great, then you design your stuff and it goes great, then you deploy to the real world and everything catches on fire instantly, and every time you put one fire out another one starts.

      • By vjvjvjvjghv 2025-06-2718:141 reply

        Now we have "JSON as savior". I see it way too often where new people come into a project and the first thing they want to do is to replace all XML with JSON, just because. Never mind that this solves basically nothing and often introduces its own set of problems. I am not a big fan of XML but to me it's pretty low in the hierarchy of design problems.

        • By SoftTalker 2025-06-2719:05

          The only problem with XML is the verbosity of the markup. Otherwise it's a nice way to structure data without the bizarre idiosyncrasies of YAML or JSON.

      • By diggan 2025-06-2710:472 reply

        > It's generally speaking part of the problem with the entire "XML as a savior" mindset of that earlier era and a big reason of why we left them

        Generally speaking I feel like this is true for a lot of stuff in programming circles, XML included.

        New technology appears, some people play around with it. Others come up with using it for something else. Give it some time, and eventually people start putting it everywhere. Soon "X is not for Y" blogposts appear, and usage finally starts to decrease as people rediscover "use the right tool for the right problem". Wait yet some more time, and a new technology appears, and the same cycle begins again.

        Seen it with so many things by now that I think "we'll" (the software community) forever be stuck in this cycle and the only way to win is to explicitly jump out of the cycle and watch it from afar, pick up the pieces that actually make sense to continue using and ignore the rest.

        • By colejohnson66 2025-06-2713:344 reply

          A controversial opinion, but JSON is that too. Not as bad as XML was (̶t̶h̶e̶r̶e̶'̶s̶ ̶n̶o̶ ̶"̶J̶S̶L̶T̶"̶)̶, but wasting cycles to manifest structured data in an unstructured textual format has massive overhead on the source and destination sides. It only took off because "JavaScript everywhere" was taking off — performance be damned. Protobufs and other binary formats already existed, but JSON was appealing because it's easily inspectable (it's plaintext) and easy to use — `JSON.stringify` and `JSON.parse` were already there.

          We eventually said, "what if we made databases based on JSON" and then came MongoDB. Worse performance than a relational database, but who cares! It's JSON! People have mostly moved away from document databases, but that's because they realized it was a bad idea for the majority of usecases.

          • By jimbokun 2025-06-2718:00

            Both XML and JSON were poor replacements for s-expressions. Combined with Lisp and Lisp macros, a more powerful data manipulation text format and language has never been created.

          • By ako 2025-06-2713:502 reply

            There is JSLT: https://github.com/schibsted/jslt and it can be useful if you need to transform a json document into another json structure.

            • By nolok 2025-06-2715:351 reply

              The people who made that are either very funny in a sarcastic way, or in severe need of a history lesson about the area they're working in.

              • By ako 2025-06-2717:422 reply

                What is a better alternative if you just need to transform JSON from one structure to another JSON structure?

                • By asa400 2025-06-2718:28

                  Load it into a full programming language runtime and use the great collections libraries available in almost all languages to transform it and then serialize it into your target format. I want to use maps and vectors and real integers and functions and date libraries and spec libraries. String to string processing is hell.

                • By rorylaitila 2025-06-2718:38

                  Imperative code. Easy to mentally parse, comment, log, splice in other data. Why add another dependency just to go from json>json? That'd need an exceptional justification.

          • By diggan 2025-06-2715:46

            Yup, agree with everything you said!

            I think the only part left out is about people currently believing in the current hyped thing, "because this time it's right!" or whatever they claim. Kind of the way TypeScript people always appear when you say that TypeScript is currently one of those hyped things and will eventually be overshadowed by something else, just like the other languages before it; and then, sure enough, someone will share why TypeScript happens to be different.

          • By imtringued 2025-06-2715:581 reply

            The fact that you bring up protobufs as the primary replacement for JSON speaks volumes. It's like you're worried about a problem that only exists in your own head.

            >wasting cycles to manifest structured data in an unstructured textual format

            JSON IS a structured textual format you dofus. What you're complaining about is that the message defines its own schema.

            >has massive overhead on the source and destination sides

            The people that care about the overhead use MessagePack or CBOR instead.

            I personally hope that I will never have to touch anything based on protobufs in my entire life. Protobuf is a garbage format that fails at the basics. You need the schema one way or another, so why isn't there a way to negotiate the schema at runtime in protobuf? Easily half or more of the questionable design decisions in protobuffers would go away if the client retrieved the schema at runtime. The compiler based workflow in Protobuf doesn't buy you a significant amount of performance in the average JS or JVM based webserver since you're copying from a JS object or POJO to a native protobuf message anyway. It's inviting an absurd amount of pain for essentially zero to no benefits. What I'm seeing here is a motte-bailey justification for making the world a worse place. The motte being the argument that text based formats are computationally wasteful, which is easily defended. The bailey being the implicit argument that hard coding the schema the way protobuf does is the only way to implement a binary format.

            Note that I'm not arguing particularly in favor of MessagePack here or even against protobuf as it exists on the wire. If anything, I'm arguing the opposite. You could have the benefits of JSON and protobuf in one. A solution so good that it makes everything else obsolete.

            • By colejohnson66 2025-06-2716:12

              I didn't say protobufs were a valid replacement - you only think I did. "Protobufs and other binary formats already existed, [..]". I was only using it as an example of a binary format that most programmers have heard of; More people know of protobufs than MessagePack and CBOR.

              Please avoid snark.

        • By colonwqbang 2025-06-2712:071 reply

          There have been many such cycles, but the XML hysteria of the 00s is the worst I can think of. It lasted a long time and the square peg XML was shoved into so many round holes.

          • By 0x445442 2025-06-2712:403 reply

            IDK, the XML hysteria is similar by comparison to the dynamic and functional languages hysterias. And it pales in comparison to the micro services, SPA and the current AI hysterias.

            • By homebrewer 2025-06-2715:182 reply

              IMHO it's pretty comparable, the difference is only in the magnitude of insanity. After all, the industry did crap out these hardware XML accelerators that were supposed to improve performance of doing massive amounts of XML transformations — is it not the GPU/TPU craze of today?

              https://en.wikipedia.org/wiki/XML_appliance

              E.g.

              https://www.serverwatch.com/hardware/power-up-xml-data-proce...

              • By bogeholm 2025-06-2718:58

                From your first link

                > An XML appliance is a special-purpose network device used to secure, manage and mediate XML traffic.

                Holy moly

              • By soulofmischief 2025-06-2716:55

                At least arrays of numbers are naturally much closer to the hardware, we've definitely come a long way in that regard.

            • By vjvjvjvjghv 2025-06-2718:15

              Exactly. Compared to microservices XML is a pretty minor problem.

            • By xorcist 2025-06-2715:05

              Agreed. Also, Docker.

      • By jimbokun 2025-06-2717:51

        It was very odd that a simple markup language was somehow seen as the savior for all computing problems.

        Markup languages are a fine and useful and powerful way for modeling documents, as in narrative documents with structure meant for human consumption.

        XML never had much to recommend it as the general purpose format for modeling all structured data, including data meant primarily for machines to produce and consume.

      • By chriswarbo 2025-06-2712:36

        > part of the problem with the entire "XML as a savior" mindset of that earlier era

        I think part of the problem is focusing on the wrong aspect. In the case of XSLT, I'd argue its most important properties are being pure, declarative, and extensible. Those can have knock-on effects, like enabling parallel processing, untrusted input, static analysis, etc. The fact it's written in XML is less important.

        Its biggest competitor is JS, which might have nicer syntax but it loses those core features of being pure and declarative (we can implement pure/declarative things inside JS if we like, but requiring a JS interpreter at all is bad news for parallelism, security, static analysis, etc.).

        When fashions change (e.g. XML giving way to JS, and JSON), we can end up throwing out good ideas (like a standard way to declare pure data transformations).

        (Of course, there's another layer to this, since XML itself was a more fashionable alternative to S-expressions; and XSLT is sort of like Lisp macros. Everything old is new again...)

      • By em-bee 2025-06-2713:031 reply

        > Those were defined as machine language meant for machine talking to machine

        i don't believe this is true. machine language doesn't need the kind of verbosity that xml provides. sgml/html/xml were designed to allow humans to produce machine readable data. so they were meant for humans to talk to machines and vice versa.

        • By soulofmischief 2025-06-2717:06

          Yes, I think the main difference is having imperative vs declarative computation. With declarative computation, the performance of your code is dependent on the performance and expressiveness of the declarative layer, such as XML/XSLT. XSLT lacks the expressiveness to get around its own performance limitations.

  • By p0w3n3d 2025-06-277:124 reply

    Ok, so it might be a long shot, but I would say that

    1. the browsers were inconsistent in 1990-2000 so we started using JS to make them behave the same

    2. meanwhile the only thing we needed were good CSS styles which were not yet present and consistent behaviour

    3. over the years the browsers started behaving the same (mainly because Highlander rules - there can be only one, but Firefox is also coping well)

    4. but we already got used to having frameworks that would make the pages look the same on all browsers. Also the paradigm was switched to have json data rendered

    5. at the current technology we could cope with server generated old-school web pages because they would have low footprint, work faster and require less memory.

    Why do I say that? Recently we started working on a migration from a legacy system. Looks like 2000s standard page per HTTP request. Every action like add remove etc. requires a http refresh. However it works much faster than our react system. Because:

    1. Nowadays the internet is much faster

    2. Phones have a lot of memory which is wasted by js frameworks

    3. in the backend all's almost same old story - CRUD CRUD and CRUD (+ pagination, + transactions)

    • By ozim 2025-06-279:302 reply

      AJAX and updating the DOM wasn't there just to "make things faster"; it was there to change the paradigm of "web sites" or "web documents" — because the web was for displaying documents. Full page reload makes sense if you are working in a document paradigm.

      It works well here on HN for example as it is quite simple.

      There are a lot of other examples where people most likely should do a simple website instead of using JS framework.

      But "we could all go back to full page reloads" is not true, as there really are proper "web applications" out there for which full page reloads would be a terrible UX.

      To summarize there are:

      "websites", "web documents", "web forms" that mostly could get away with full page reloads

      "web applications" that need complex stuff presented and manipulated while full page reload would not be a good solution

      • By alerighi 2025-06-2711:34

        Yes, of course for web applications you can't do full page reloads (you couldn't back in the day either, when web applications existed in the form of Java applets or Flash content).

        Let's face it, most uses of JS frameworks are for blogs or things where you wouldn't even notice a full page reload: nowadays browsers are advanced and only redraw the screen once the content has finished loading, meaning that out of the box they mostly do what React does (only redraw the DOM elements that changed), so reloading a page that only changes one button at the UI level does not result in a flicker or a visible reload of the whole page.

        BTW, even React now is suggesting people to run the code server-side if it is possible (it's the default of Next.JS), since it makes the project easier to maintain, debug, test, as well as get better score in SEO from search engines.

        I'm still a fan of the "old" MVC models of classical frameworks such as Laravel, Django, Rails, etc. to me make overall projects that are easier to maintain for the fact that all code runs in the backend (except maybe some jQuery animation client side), model is well separated from the view, there is no API to maintain, etc.

      • By alganet 2025-06-2713:471 reply

        > full page reloads

        grug remember ancestor used frames

        then UX shaman said frame bad all sour faced frame ugly they said, multiple scrollbar bad

        then 20 years later people use fancy js to emulate frames grug remember ancestor was right

        https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/...

        • By kbolino 2025-06-2715:033 reply

          Classic frames were quite bad. Every frame on a page was a separate, independent, coequal instance of the browser engine. This is almost never what you actually want. The header/footer/sidebar frames are subordinate and should not navigate freely. Bookmarks should return me to the frameset state as I left it, not the default for that URL. History should contain the frameset state I saw, not separate entries for each individual frame.

          Even with these problems, classic frames might have been salvageable, but nobody bothered to fix them.

          • By p0w3n3d 2025-06-2719:021 reply

            Iframes are no longer the thing? I must have slept over this scene

            • By kbolino 2025-06-2719:07

              By "classic frames", I mean <frameset> not <iframe>. Though iframes have some of the same problems, they don't have all of the same problems.

          • By bmacho 2025-06-2715:231 reply

            > Every frame on a page was a separate, independent, coequal instance of the browser engine. This is almost never what you actually want.

            Most frames are used for menu, navigation, frame for data, frame for additional information of data. And they are great for that. I don't think that frames are different instances of the browser engine(?) but that doesn't matter the slightest(?). They are fast and lightweight.

            > The header/footer/sidebar frames are subordinate and should not navigate freely.

            They have the ability to navigate freely but obviously they don't do that, they navigate different frames.

            • By kbolino 2025-06-2715:362 reply

              With a frameset page:

              History doesn't work right

              Bookmarks don't work right -- this applies to link sharing and incoming links too

              Back button doesn't work right

              The concept is good. The implementation is bad.

              • By bmacho 2025-06-2715:461 reply

                Yup, they are not enough for an SPA, not without javascript. And if you have javascript to handle history, URL, bookmarks and all that, you can just use divs without frames.

                • By kbolino 2025-06-2715:542 reply

                  This has nothing to do with SPAs.

                  Take the POSIX specs linked in a sibling comment.

                  Or take the classic Javadocs. I am currently looking at the docs for java.util.ArrayList. Here's a link to it from my browser's URL bar: https://docs.oracle.com/javase/8/docs/api/

                  But you didn't go to the docs for java.util.ArrayList, you went to the starting page. Ok, fine, I'll link you directly to the ArrayList docs, for which I had to "view frame source" and grab the URL: https://docs.oracle.com/javase/8/docs/api/java/util/ArrayLis...

                  Ok, but now you don't see any of the other frames, do you? And I had one of those frames pointing at the java.util class. So none of these links show you what I saw.

                  And if I look in my history, there is no entry that corresponds to what I actually saw. There are separate entries for each frame, but none of them load the frameset page with the correct state.

                  These are strongly hyperlinked reference documents. Classic use of HTML. No JavaScript or even CSS needed.

                  • By bmacho 2025-06-2716:141 reply

                    This is exactly what I wrote? But let me rephrase it: frames are not enough solely for an SPA, they can't keep state, you need javascript/dynamic webserver for that.

                    > Ok, fine, I'll link you directly to the ArrayList docs, for which I had to "view frame source" and grab the URL:

                    You could've just right click on the "frames" link, and copy the URL: https://docs.oracle.com/javase/8/docs/api/index.html?java/ut... . They use javascript to navigate based on the search params in the URL. It's not great, it should update the URL as you navigate, maybe you can send them a PR for that. (And to change state of the boxes on the left too.)

                    Also browser history handling is really messy and hard to get right, regardless of frames.

                    > And if I look in my history, there is no entry that corresponds to what I actually saw.

                    ? If you write a javascript +1 button that updates a counter, there won't be a corresponding entry in your history for the actual states of your counter. I don't see how that is a fundamental problem with javascript(?).

                    • By kbolino 2025-06-2716:231 reply

                      It's cool that they have that link. Most frame sites didn't. JS actually isn't necessary to make that work, they could have just interpolated the requested page server-side. But it only correctly points to one frame. It's the most important frame, to be fair, but it doesn't do anything for the other two frames.

                      I don't understand how pre-HTML5, non-AJAX reference docs qualify as an "SPA". This is just an ordinary web site.

              • By alganet 2025-06-2716:161 reply

                > History doesn't work right

                > Bookmarks don't work right -- this applies to link sharing and incoming links too

                > Back button doesn't work right

                Statements that apply to many JS webpages too.

                pushState/popState came years after frames lost popularity. These issues are not related to their downfall.

                Relax, dude. I'm not claiming we should use frames today. I'm saying they were simple good tools for the time.

                • By kbolino 2025-06-2716:271 reply

                  They were never good. They were always broken in these ways. For some sites, it wasn't a big deal, because the only link that ever mattered was the main link. But a lot of places that used frames were like the POSIX specs or Javadocs, and they sucked for anything other than immediate, personal use. They were not deprecated because designers hated scrollbars (they do hate them, and that sucks too, but it's beside the point).

                  And, ironically, the best way to fix these problems with frames is to use JavaScript.

                  • By alganet 2025-06-2716:511 reply

                    > They were never good

                    They were good enough.

                    > For some sites, it wasn't a big deal

                    Precisely my point.

                    > POSIX specs or Javadocs

                    Hey, they work for me.

                    > the best way to fix these problems with frames is to use JavaScript.

                    Some small amounts of javascript. Mainly, proxy the state for the main frame to the address bar. No need for virtual dom, babel, react, etc.

                    --

                    _Again_, you're arguing like I'm defending frames for use today. That's not what I'm doing.

                    Many websites follow a "left navigation, center content" overall layout, in which the navigation stays somehow stationary and the content is updated. Frames were broken, but were in the right direction. You're nitpicking on the ways they were broken instead of seeing the big picture.

                    • By kbolino 2025-06-2717:091 reply

                      Directionally correct but badly done can poison an idea. Frames sucked and never got better.

                      Along with other issues, this gave rise to AJAX and SPAs and JS frameworks. A big part of how we got where we are today is because the people making the web standards decided to screw around with XHTML and "the semantic web" (another directionally correct but badly done thing!) and other BS for about a decade instead of improving the status quo.

                      So we can and often should return to ancestor but if we're going to lay blame and trace the history, we ought to do it right.

                      • By alganet 2025-06-2717:501 reply

                        Your history is off, and you are mixing different eras and browser standards with other initiatives.

                        Frames gave place to (the incorrect use of) tables. The table era was way worse than it is today. Transparent gif spacers, colspan... it was all hacks.

                        The table era gave birth to a renewal of web standards. This ran mostly separately from the semantic web (W3C is a consortium, not a single central group).

                        The table era finally gave way to the jQuery era. Roughly around this time, browser standards got their shit together... but vendors didn't.

                        Finally, the jQuery era ended with the rise of full JS frameworks (backbone first, then ember, winjs, angular, react). Vendors operating outside standards still dominate in this era.

                        There's at least two whole generations between frames and SPAs. That's why I used the word "ancestor", it's 90s tech I barely remember because I was a teenager. All the other following eras I lived through and experienced first hand.

                        The poison on the frames idea wore off ages ago. The fact that websites not made with them resemble their use is a proof of that, they just don't share the same implementation. The "idea" is seen with kind eyes today.

                        • By kbolino 2025-06-2718:161 reply

                          I feel like we're mostly in violent agreement.

                          The key point about frames in the original context of this thread as I understood it was that they allowed a site to only load the content that actually changes. So accounting for the table-layout era doesn't really change my perspective: frames were so bad, that web sites were willing to regress to full-page-loads instead, at least until AJAX came along -- though that also coincides with the rise of the (still ongoing) div-layout era.

                          I agree wholeheartedly that the concept of partial page reloading in a rectilinear grid is alive and well. Doing that with JavaScript and CSS is the whole premise of an SPA as I understand it, and those details are key to the difference between now and the heyday of frames. But there was also a time when full-page-loading was the norm between the two eras, reflecting the disillusionment with frames as they were implemented and ossified.

                          The W3C (*) spent a good few years working on multiple things most of which didn't pan out. Maybe I'm being too harsh, but it feels like a lot of their working groups just went off and disconnected from practice and industry for far too long. Maybe that was tangential to the ~decade-long stagnation of web standards, but that doesn't really change the point of my criticism.

                          * = Ecma has a part in this too, since JavaScript was standardized by them instead of W3C for whatever reason, and they also went off into la-la land for roughly the same period of time

                          • By alganet 2025-06-2719:07

                            > I feel like we're mostly in violent agreement.

                            Probably, yes!

                            > So accounting for the table-layout era doesn't really change my perspective: frames were so bad, that web sites were willing to regress to full-page-loads instead

                            That's where we disagree.

                            From my point of view, what brought sites to full page loads were designers. Design folk wanted to break out of the "left side navigation, right content" mold and make good looking visual experiences.

                            This all started with sites like this:

                            https://www.spacejam.com/1996/

                            This website is an interstitial fossil between frames and the full-table nightmare. The homepage represents what (at the time) was a radical way of experiencing the web.

                            It still carries vestiges of frames in other sections:

                            https://www.spacejam.com/1996/cmp/jamcentral/jamcentralframe...

                            However, the home is their crown jewel and it is representative of the years that followed.

                            This new visual experience was enough to discard partial loading. And for a while, it stayed like this.

                            JS up to this point was still a toy. DHTML, hover tricks, trinkets following the mouse cursor. It was unthinkable to use it to manage content.

                            It was not until CSS zen garden, in 2003, that things started to shift:

                            https://csszengarden.com/pages/about/

                            Now, some people were saying that you could do pretty websites without tables. By this time, frames were already forgotten and obsolete.

                            So, JS never killed frames. There was a whole generation in between that never used frames, but also never used JS to manage content (no AJAX, no innerHTML shenanigans, nothing).

          • By alganet 2025-06-2715:381 reply

            You can see frames in action on the POSIX spec:

            https://pubs.opengroup.org/onlinepubs/9799919799/

            They can navigate targeting any other frame. For example, clicking "System Interfaces" updates the bottom-left navigation menu, while keeping the state of the main document frame.

            It's quite simple, it just uses the `target` attribute (target="_blank" remains popular as a vestigial limb of this whole approach).
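
            (Roughly how that looked, a from-memory sketch with illustrative file names:)

                <frameset cols="25%,75%">
                  <frame name="nav" src="nav.html" />
                  <frame name="main" src="welcome.html" />
                </frameset>

                <!-- in nav.html: clicking this replaces only the "main" frame -->
                <a href="chapter1.html" target="main">Chapter 1</a>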

            This also worked with multiple windows (yes, there were multi-window websites that could present interactions that handled multiple windows).

            The popular iframe is sort of salvaged from frame tech; it is still used extensively and is not deprecated.

            • By kbolino 2025-06-2715:40

              An iframe is inherently subordinate. This solves one of the major issues with classic frames.

              Classic frames are simple. Too simple. Your link goes to the default state of that frameset. Can you link me any non-default state? Can I share a link to my current state with you?

    • By viraptor 2025-06-277:532 reply

      That timeline doesn't sound right to me. JS was rarely used to standardise behaviour - we had lots of user agent detection and relying on quirks ordering to force the right layout. JS really was for the interactivity at the beginning - DHTML and later AJAX. I don't think it even had easy access to layout related things? (I may be mistaken though) CSS didn't really make things more consistent either - once it became capable it was still a mess. Sure, CSS garden was great and everyone was so impressed with semantic markup while coding tables everywhere. It took ages for anything to actually pass first two ACIDs. I'm not sure frameworks ever really impacted the "consistent looks" side of things - by the time we grew out of jQuery, CSS was the looks thing.

      Then again, it was a long time. Maybe it's me misremembering.

      • By jonwinstanley 2025-06-278:035 reply

        For me, JQuery was the thing that fixed the browser inconsistencies. If you used JQuery for everything, your code worked in all the browsers.

        This was maybe 2008?

        • By Cthulhu_ 2025-06-279:221 reply

          Before jQuery there was Prototype.js, part of early AJAX support in RoR, which fixed inconsistencies in how browsers could fetch data, especially in the era between IE 5 and 7 (native JS `XMLHttpRequest` was only available from IE 7 onwards, before that it was some ActiveX thing. The other browsers supported it from the get go). My memory is vague, but it also added stuff like selectors, and on top of that was script.aculo.us which added animations and other such fanciness.

          jQuery took over very quickly though for all of those.

          • By arkh 2025-06-279:381 reply

            > native JS `XMLHttpRequest` was only available from IE 7 onwards, before that it was some ActiveX thing.

            Almost sure it was available on IE6. But even if not, you could emulate it using hidden iframes to call pages which embedded some javascript interacting with the main page. I still have fond memories of using mootools for lightweight nice animations and less fond ones of dojo.

            • By JimDabell 2025-06-2711:13

              Internet Explorer 5–6 was the ActiveX control. Then other browsers implemented XMLHTTPRequest based on how that ActiveX control worked, then Internet Explorer 7 implemented it without ActiveX the same way as the other browsers, and then WHATWG standardised it.

              Kuro5hin had a dynamic commenting system based on iframes like you describe.

        • By JimDabell 2025-06-278:35

          jQuery in ~2008 was when it kinda took off, but jQuery was itself an outgrowth of work done before it on browser compatibility with JavaScript. In particular, events.

          Internet Explorer didn’t support DOM events, so addEventListener wasn’t cross-browser compatible. A lot of people put work in to come up with an addEvent that worked consistently cross-browser.

          The DOMContentLoaded event didn’t exist, only the load event. The load event wasn’t really suitable for setting up things like event handlers because it would wait until all external resources like images had been loaded too, which was a significant delay during which time the user could be interacting with the page. Getting JavaScript to run consistently after the DOM was available, but without waiting for images was a bit tricky.

          These kinds of things were iterated on in a series of blog posts from several different web developers. One blogger would publish one solution, people would find shortcomings with it, then another blogger would publish a version that fixed some things, and so on.

          This is an example of the kind of thing that was happening, and you’ll note that it refers to work on this going back to 2001:

          https://robertnyman.com/2006/08/30/event-handling-in-javascr...

          When jQuery came along, it was really trying to achieve two things: firstly, incorporating things like this to help browser compatibility; and second, to provide a “fluent” API where you could chain API calls together.

        • By viraptor 2025-06-2710:34

          I wasn't clear: jQuery was definitely used for browser inconsistencies, but in behaviour, not layout. It had just a small overlap with CSS functionality (at first, until it all got exposed to JS)

        • By jbverschoor 2025-06-278:101 reply

          Probably 2005.

          2002, I was using “JSRS”, and returning http 204/no content, which causes the browser to NOT refresh/load the page.

          Just for small interactive things, like a start/pause button for scheduled tasks. The progress bar etc.

          But yeah, in my opinion we lost about 15 years of proper progress.

          The network is the computer came true

          The SUN/JEE model is great.

          It’s just that monopolies stifle progress and better standards.

          Standards are pretty much dead, and everything is at the application layer.

          That said.. I think XSLT sucks, although I haven’t touched it in almost 20 years. The projects I was on, there was this designer/xslt guru. He could do anything with it.

          XPath is quite nice though

          • By JimDabell 2025-06-278:401 reply

            > But yeah, in my opinion we lost about 15 years of proper progress.

            Internet Explorer 6 was released in 2001 and didn’t drop below 3% worldwide until 2015. So that’s a solid 14 years of paralysis in browser compatibility.

            • By jbverschoor 2025-06-2710:52

              Time flies when you’re having fun

        • By benediktwerner 2025-06-278:361 reply

          Wasn't it more about inconsistencies in JS though? For stuff which didn't need JS at all, there also shouldn't be much need for JQuery.

          • By dspillett 2025-06-279:35

            jQuery, along with a number of similar attempts and more single-item-focused polyfills¹ was as much about DOM inconsistencies as JS ones. It was also about making dealing with the DOM more convenient² even where it was already consistent between commonly used browsers.

            DOM manipulation of that sort is JS dependent, of course, but I think considering language features and the environment, like the DOM, to be separate-but-related concerns is valid. There were less kitchen-sink-y libraries that only concentrated on language features or specific DOM features. Some may even consider a few parts in a third section: the standard library, though that feature set might be rather small (not much more than the XMLHTTPRequest replacement/wrappers?) to consider its own thing.

            > For stuff which didn't need JS at all, there also shouldn't be much need for JQuery.

            That much is mostly true, as it by default didn't do anything to change non-scripted pages. Some polyfills for static HTML (for features that were inconsistent, or missing entirely in, usually, old-IE) were implemented as jQuery plugins though.

            --------

            [1] Though I don't think they were called that back then, the term coming later IIRC.

            [2] Method chaining³, better built-in searching and filtering functions⁴, and so forth.

            [3] This divides opinions a bit though was generally popular, some other libraries did the same, others tried different approaches.

            [4] Which we ended up coding repeatedly in slightly different ways when needed otherwise.

      • By middleagedman 2025-06-279:221 reply

        Old guy here. Agreed- the actual story of web development and JavaScript’s use was much different.

        HTML was the original standard, not JS. HTML was evolving early on, but the web was much more standard than it is today.

        Early-mid 1990s web was awesome. HTML served HTTP, and pages used header tags, text, hr, then some backgound color variation and images. CGI in a cgi-bin dir was used for server-side functionality, often written in Perl or C: https://en.m.wikipedia.org/wiki/Common_Gateway_Interface

        Back then, if you learned a little HTML, you could serve up audio, animated gifs, and links to files, or Apache could just list files in directories to browse like a fileserver without any search. People might get a friend to let them have access to their server and put content up in it or university, etc. You might be on a server where they had a cgi-bin script or two to email people or save/retrieve from a database, etc. There was also a mailto in addition to href for the a (anchor) tag for hyperlinks so you could just put you email address there.

        Then a ton of new things were appearing. PhP on server-side. JavaScript came out but wasn’t used much except for a couple of party tricks. ColdFusion on server-side. Around the same time was VBScript which was nice but just for IE/Windows, but it was big. Perl then PhP were also big on server-side. If you installed Java you could use Applets which were neat little applications on the page. Java Web Server came out serverside and there were JSPs. Java Tomcat came out on server-side. ActionScript came out to basically replace VBScript but do it on serverside with ASPs. VBScript support went away.

        During this whole time, JavaScript had just evolved into more party tricks and thing like form validation. It was fun, but it was PhP, ASP, JSP/Struts/etc. serverside in early 2000s, with Rails coming out and ColdFusion going away mostly. Facebook was PhP mid-2000s, and LAMP stack, etc. People breaking up images using tables, CSS coming out with slow adoption. It wasn’t until mid to later 2000s until JavaScript started being used for UI much, and Google’s fostering of it and development of v8 where it was taken more seriously because it was slow before then. And when it finally got big, there was an awful several years where it was framework after framework super-JavaScript ADHD which drove a lot of developers to leave web development, because of the move from server-side to client-side, along with NoSQL DBs, seemingly stupid things were happening like client-side credential storage, ignoring ACID for data, etc.

        So- all that to say, it wasn’t until 2007-2011 before JS took off.

        • By nasduia 2025-06-2710:411 reply

          Though much less awesome was all the Flash, Realplayer and other plugins required.

          • By sim7c00 2025-06-2712:121 reply

            Realplayer. christ, forgot all about that one.... thanks... frozenface

            • By p0w3n3d 2025-06-2712:351 reply

              ah the feelings. those were the times

              • By viraptor 2025-06-2712:43

                If your site didn't have a flash animated menu, was it even a real website at that time?

    • By bob1029 2025-06-279:151 reply

      > at the current technology we could cope with server generated old-school web pages because they would have low footprint, work faster and require less memory

      I've got a .NET/Kestrel/SQLite stack that can crank out SSR responses in no more than ~4 milliseconds. Average response time is measured in hundreds of microseconds when running release builds. This is with multiple queries per page, many using complex joins to compose view-specific response shapes. Getting the data in the right shape before interpolating HTML strings can really help with performance in some of those edges like building a table with 100k rows. LINQ is fast, but approaches like materializing a collection per row can get super expensive as the # of items grows.

      The closer together you can get the HTML templating engine and the database, the better things will go, in my experience. At the end of the day, all of that fancy structured DOM is just a stream of bytes that needs to be fed to the client. Worrying about elaborate AST/parser approaches when you could just use StringBuilder and clever SQL queries has created an entire pointless, self-serving industry. The only arguments I've ever heard against using something approximating this boil down to arrogant security hall monitors who think developers can't be trusted to use the HTML escape function properly.

      • By chriswarbo 2025-06-2713:01

        > arrogant security hall monitors who think developers can't be trusted to use the HTML escape function properly.

        Unfortunately, they're not actually wrong though :-(

        Still, there are ways to enforce escaping (like preventing "stringly typed" programming) which work perfectly well with streams of bytes, and don't impose any runtime overhead (e.g. equivalent to Haskell's `newtype`)

    • By em-bee 2025-06-277:411 reply

      > at the current technology we could cope with server generated old-school web pages because they would have low footprint, work faster and require less memory.

      unless you have a high latency internet connection: https://news.ycombinator.com/item?id=44326816

      • By p0w3n3d 2025-06-277:452 reply

        However, when you have a high-latency connection, the "thick client" JSON-filled webapp only has an advantage if most of the business logic happens in the browser. E.g. Google Docs: great, and much better than the 2000s-style design it used to have. An application that searches for apartments to rent? Not really, I would say.

        -- edit --

        By the way, in 2005 I programmed using a very funny PHP framework, PRADO, that sent every change in the UI to the server. Boy, it was slow and server-heavy. This was a direction we should never have gone...

        • By em-bee 2025-06-278:001 reply

          > An application that searches for apartments to rent? Not really, I would say.

          not a good example. i can't find it now, but there was a story/comment about a realtor app that people used to sell houses. often when they were out with a potential buyer they had bad internet access and loading new data and pictures for houses was a pain. it wasn't until they switched to using a frontend framework to preload everything with the occasional updates that the app became usable.

          high latency affects any interaction with a site. even hackernews is a pain to read over a high latency connection and would improve if new comments were loaded in the background. the problem creeps up on you faster than you think.

          • By _heimdall 2025-06-2711:021 reply

            Prefetching pages doesn't require a frontend framework though. All it takes is a simple script to preload all or specific anchor links on the page, or you could get fancier with a service worker and a site manifest if you want to preload pages that may not be linked on the current page.
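
            A minimal sketch of the declarative variant of that idea, with hypothetical URLs; a small script or a service worker could generate the same hints from the anchors on the page:

                <!-- Plain HTML prefetch hints, no framework required.
                     The browser fetches these resources in the background when idle. -->
                <link rel="prefetch" href="/listings/page-2.html">
                <link rel="prefetch" href="/listings/photos/house-42.jpg">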

        • By catmanjan 2025-06-278:451 reply

          Lol, you'd hate to see what Blazor is doing then

          • By Tade0 2025-06-278:52

            Or Phoenix.LiveView for that matter.

  • By CiaranMcNulty 2025-06-276:417 reply

    It's sad how the bloat of '00s enterprise XML made the tech seem outdated and drove everyone to 'cleaner' JSON, because things like XSLT and XPath were very mature and solved a lot of the problems we still struggle with in other formats.

    I'm probably guilty of some of the bad practice: I have fond memories of (ab)using XSLT includes back in the day with PHP stream wrappers to have stuff like `<xsl:include href="mycorp://invoice/1234">`

    This may be out-of-date bias, but I'm still a little uneasy letting the browser do the transformation locally, just because it used to be a minefield of incompatibility.

    • By Cthulhu_ 2025-06-279:253 reply

      It's been 84 years but I still miss some of the "basics" of XML in JSON - a proper standards organization, for one. And things like schemas were (or at least felt) so much better defined in XML land; it took nearly a decade for JSON land to catch up.

      The last thing I really did with XML was a technology called EXI, a transfer method that converted an XML document into a compressed binary data stream. Because translating a data structure to ASCII, compressing it, sending it over HTTP, etc., and doing the same thing in reverse is a bit silly. At this point protobuf and co. are more popular, but imagine if XML had stayed around: it would all be compatible standards working with each other (in my idealized mind), whereas there's a hard barrier between e.g. protobuf/gRPC and JSON APIs. Possibly for the better?
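
      To illustrate the schema point, here is a minimal sketch of an XML Schema (XSD) for a hypothetical bookstore document; the element names, types and structure are invented for illustration, but a validating parser could reject non-conforming documents against something like this before any application code runs:

          <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
            <!-- A <bookstore> holds one or more <book> elements. -->
            <xs:element name="bookstore">
              <xs:complexType>
                <xs:sequence>
                  <xs:element name="book" maxOccurs="unbounded">
                    <xs:complexType>
                      <xs:sequence>
                        <xs:element name="title" type="xs:string"/>
                        <xs:element name="price" type="xs:decimal"/>
                      </xs:sequence>
                      <!-- Every book must carry a numeric id attribute. -->
                      <xs:attribute name="id" type="xs:positiveInteger" use="required"/>
                    </xs:complexType>
                  </xs:element>
                </xs:sequence>
              </xs:complexType>
            </xs:element>
          </xs:schema>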

      • By bokchoi 2025-06-2713:24

        I just learned about EXI as it's being used on a project I work on. It's amazingly fast and small! It is a binary representation of the XML stream, and it can compress quite small if you have an XML schema to go with your XML.

        I was curious about how it is implemented and I found the spec easy to read and quite elegant: https://www.w3.org/TR/exi/

      • By sumtechguy 2025-06-2712:16

        That data-transform thing XSLT could do was so cool. You could twist it into emitting just about any other format, with XML as the top layer. You want it tab-delimited, or as YAML? Feed it the right stylesheet and there you go. Another system wants CSV? Sure thing, different stylesheet and there you go.

        For a transport tech XML was OK; it just wasted 20% of your bandwidth on being a text encoding. Plus wrapping your head around those stylesheets was a mind twister. Not surprised people despise it, as it has the ability to be wickedly complex for no real reason.
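
        A minimal sketch of that trick, assuming a hypothetical <bookstore> document with <book> children holding <title> and <price>; setting method="text" tells the processor to emit plain text, so the same data comes out as CSV:

            <xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
              <!-- Emit plain text instead of XML or HTML. -->
              <xsl:output method="text"/>
              <xsl:template match="/bookstore">
                <xsl:text>title,price&#10;</xsl:text>
                <xsl:for-each select="book">
                  <xsl:value-of select="title"/>
                  <xsl:text>,</xsl:text>
                  <xsl:value-of select="price"/>
                  <xsl:text>&#10;</xsl:text>
                </xsl:for-each>
              </xsl:template>
            </xsl:stylesheet>

        (A real stylesheet would also need to quote fields that contain commas.)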

      • By chrisweekly 2025-06-2711:45

        84 years? nope.

    • By rwmj 2025-06-278:402 reply

      XML is fine. A bit wordy, but I appreciate its precision and expressiveness compared to YAML.

      XPath is kind of fine. It's hard to remember all the syntax but I can usually get there with a bit of experimentation.

      XSLT is absolutely insane nonsense and needs to die in a fire.

      • By cturner 2025-06-2711:38

        It depends what you use it for. I worked on an interbank messaging platform that normalised everything into a series of standard XML formats, and then used XSLT for representing data to the client. Common use case: we could re-render data into whatever a receiver's risk system was expecting, in config rather than compiled code. You could have people trained in XSLT doing that; they did not need to be more experienced developers. Fixes were fast. It was good for this. Another time I worked on a production pipeline for a publisher of education books. Again, data stored in normalised XML. XSLT is well suited to mangling in that scenario.
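
        A sketch of the usual pattern for that kind of per-receiver re-rendering (the element names here are invented): an identity transform copies everything through unchanged, and small overrides rename or reshape just the fields a particular receiver expects.

            <xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
              <!-- Identity transform: copy every node and attribute as-is. -->
              <xsl:template match="@*|node()">
                <xsl:copy>
                  <xsl:apply-templates select="@*|node()"/>
                </xsl:copy>
              </xsl:template>
              <!-- Override: this receiver wants <SettlementDate> instead of <tradeDate>. -->
              <xsl:template match="tradeDate">
                <SettlementDate><xsl:apply-templates/></SettlementDate>
              </xsl:template>
            </xsl:stylesheet>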

      • By tclancy 2025-06-2714:28

        That's funny, I would reverse those. I loved XSLT, though it took a long time for it to click; it was my gateway drug to concepts like functional programming and idempotency. XPath is pretty great too. The problem was XML, but it isn't inherent to it -- it empowered (for good and bad) lots of people who had never heard of data normalization to publish data, and some of it was good, but, like Irish Alzheimer's, we only remember the bad ones.

    • By kllrnohj 2025-06-2717:05

      The game RimWorld stores all its game configuration data in XML and uses XPath for modding, and it's so incredibly good. It's a seriously underrated combination for enabling relatively stable local modifications of data. I don't know of any other game that does this, probably because XML has a reputation of being "obsolete" or whatever. But it's just such a robust system for this use case.

      https://rimworldwiki.com/wiki/Modding_Tutorials/PatchOperati...
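
      Roughly what a mod patch from that tutorial looks like; this is from memory, so treat the exact def and field names as illustrative. An XPath expression selects a node in the game's XML defs and the operation replaces it:

          <Patch>
            <Operation Class="PatchOperationReplace">
              <!-- Select the stat to change in the base game's defs... -->
              <xpath>/Defs/ThingDef[defName="Wall"]/statBases/MaxHitPoints</xpath>
              <!-- ...and swap in the modded value. -->
              <value>
                <MaxHitPoints>500</MaxHitPoints>
              </value>
            </Operation>
          </Patch>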

    • By tannhaeuser 2025-06-2712:45

      > bloat of '00s enterprise XML

      True, and it's even more sad that XML was originally just intended as a simplified subset of SGML (HTML's meta syntax with tag inference and other shortforms) for delivery of markup on the web and to evolve markup vocabularies and capabilities of browsers (of which only SVG and MathML made it). But when the web hype took over, W3C (MS) came up with SOAP, WS-this and WS-that, and a number of programming languages based on XML including XSLT (don't tell HNers it was originally Scheme but absolutely had to be XML just like JavaScript had to be named after Java; such was the madness).

    • By codeulike 2025-06-277:261 reply

      XPath would have been nice if you didn't have to pedantically namespace every bit of every query

      • By masklinn 2025-06-278:261 reply

        That… has nothing to do with xpath?

        If your document has namespaces, xpath has to reflect that. You can either tank it or explicitly ignore namespaces by foregoing the shorthands and checking `local-name()`.

        • By codeulike 2025-06-2710:413 reply

          Ok. Perhaps 'namespace the query' wasn't quite the right way of explaining it. All I'm saying is, whenever I've used XPath, instead of it looking nice like

              /*bookstore/*book/*title

          its been some godawful mess like

              /*[name()='bookstore']/*[name()='book']/*[name()='title']

          ... I guess because they couldn't bear to have it just match on tags as they are in the file, and it had to be tethered to some namespace stuff that most people don't bother with. A lot of XML is ad hoc, without a namespace defined anywhere.

          It's like

          Me: Hello XPath, here's an XML document, please find all the bookstore/book/title tags

          XPath: *gasps* Sir, I couldn't possibly look for those tags unless you tell me which namespace we are in. Are you some sort of deviant?

          Me: oh ffs *googles XPath name() syntax*

          • By masklinn 2025-06-2712:22

            > the tags as they are in the file

            Is not actually relevant, and is not information the average XML processor even receives. If the file uses a default namespace (xmlns), then the elements are namespaced, and anything processing the XML has to either properly handle namespaces or explicitly ignore them.

            > A lot of XML is ad-hoc without a namespace defined anywhere

            If the element is not namespaced, xpath does not require a prefix; you just write

                //bookstore/book/title
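
            And when the document does declare a default namespace, the usual fix is to bind any prefix to that URI wherever you run the query, rather than falling back to name()/local-name(). A sketch in XSLT, with a hypothetical namespace URI:

                <xsl:stylesheet version="1.0"
                    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                    xmlns:b="http://example.com/books">
                  <!-- "b" is bound to the document's default namespace URI,
                       so the path stays readable. -->
                  <xsl:template match="/">
                    <xsl:value-of select="/b:bookstore/b:book/b:title"/>
                  </xsl:template>
                </xsl:stylesheet>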

          • By ndriscoll 2025-06-2712:14

            I don't recall ever needing to do that for unnamespaced tags. Are you sure the issue you're having isn't that the tags have a namespace?

            my:book is a different thing from your:book and you generally don't want to accidentally match on both. Keeping them separate is the entire point of namespaces. Same as in any programming language.

          • By rhdunn 2025-06-2711:57

            Newer versions of XPath and XSLT allow

                /*:bookstore/*:book/*:title

    • By aitchnyu 2025-06-279:36

      In The Art of Unix Programming (2003), the author advocated bespoke text formats and writing parsers for them; writing XML by hand is on his list of war crimes. Since then, syntax highlighting, autocomplete, and autoformatting have narrowed the effort gap, and tolerant parsers (browsers being the main example) got a bad rap. Would Markdown and YAML exist with modern editors?

    • By maxloh 2025-06-278:063 reply

      However, XML is actually a worse format to transfer over the internet. It's bloated and consumes more bandwidth.

      • By JimDabell 2025-06-278:54

        XML is a great format for what it’s intended for.

        XML is a markup language system. You typically have a document, and various parts of it can be marked up with metadata, to an arbitrary degree.

        JSON is a data format. You typically have a fixed schema and things are located within it at known positions.

        Both of these have use-cases where they are better than the other. For something like a web page, you want a markup language that you progressively render by stepping through the byte stream. For something like a config file, you want a data format where you can look up specific keys.

        Generally speaking, if you’re thinking about parsing something by streaming its contents and reacting to what you see, that’s the kind of application where XML fits. But if you’re thinking about parsing something by loading it into memory and looking up keys, then that’s the kind of application where JSON fits.
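
        A tiny illustration of the difference (element names invented, loosely in the spirit of document markup): in a markup language, metadata wraps arbitrary spans of running text, which has no natural shape as keys and values.

            <p>The committee met on <date when="1999-11-16">16 November 1999</date>
               and approved <ref target="#xslt10">XSLT 1.0</ref> unanimously.</p>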

      • By bokchoi 2025-06-2713:27

        Check out EXI. It compresses the XML stream into a binary encoding and is quite small and fast:

        https://www.w3.org/TR/exi/

      • By rwmj 2025-06-278:41

        Only if you never use compression.
