Comments

  • By eadmund 2025-05-23 12:43, 21 replies

    > So, yes, instead of saying that "e" equals "65537", you're saying that "e" equals "AQAB". Aren't you glad you did those extra steps?

    Oh JSON.

    For those unfamiliar with the reason here, it’s that JSON parsers cannot be relied upon to treat numbers properly. Is 4723476276172647362476274672164762476438 a valid JSON number? Yes, of course it is. What will a JSON parser do with it? Silently truncate it to a 64-bit or 63-bit integer or a float, probably, or, if you’re very lucky, emit an error (a good JSON decoder written in a sane language like Common Lisp would of course just return the number, but few of us are so lucky).

    So the only way to reliably get large integers into and out of JSON is to encode them as something else. Base64-encoded big-endian bytes is not a terrible choice. Silently doing the wrong thing is the root of many security errors, so it is not wrong to treat every number in the protocol this way. Of course, then one loses the readability of JSON.
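    A quick Node sketch of the failure mode described above (the exact rounded value varies by parser; the loss is the point):

```javascript
// A number no IEEE-754 double can hold, fed to a typical JSON parser:
const parsed = JSON.parse('{"n": 4723476276172647362476274672164762476438}');
console.log(typeof parsed.n); // "number", silently rounded to a double

// The round trip no longer matches the original integer:
console.log(BigInt(parsed.n) === 4723476276172647362476274672164762476438n); // false
```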

    JSON is better than XML, but it really isn’t great. Canonical S-expressions would have been far preferable, but for whatever reason the world didn’t go that way.

    • By cortesoft 2025-05-23 21:00, 5 replies

      > Canonical S-expressions would have been far preferable, but for whatever reason the world didn’t go that way.

      I feel like not understanding why JSON won out is being intentionally obtuse. JSON can easily be hand written, edited, and read for most data. Canonical S-expressions are not as easy to read and much harder to write by hand; having to prefix every atom with a length makes it very tedious to write by hand. If you have a JSON object you want to hand edit, you can just type; for a canonical S-expression, you have to count how many characters you are typing/deleting, and then update the prefix.

      You might not think the ability to hand generate, read, and edit is important, but I am pretty sure that is a big reason JSON has won in the end.

      Oh, and the Ruby JSON parser handles that large number just fine.

      • By motorest 2025-05-24 16:32, 3 replies

        > I feel like not understanding why JSON won out is being intentionally obtuse. JSON can easily be hand written, edited, and read for most data.

        You are going way out of your way to try to come up with ways to rationalize why JSON was a success. The ugly truth is far simpler than what you're trying to sell: it was valid JavaScript. JavaScript WebApps could parse JSON with a call to eval(). No deserialization madness like XML, no need to import a parser. Just fetch a file, pass it to eval(), and you're done.
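        For readers who never saw the pre-JSON.parse era, a sketch of the trick (and its hazard):

```javascript
// The early-2000s technique: the response body is already a JavaScript
// expression, so "parsing" is a single eval() call. The parentheses
// force expression context (a bare {...} would parse as a block).
const body = '{"user": "alice", "admin": false}';
const data = eval('(' + body + ')');
console.log(data.user); // "alice"

// The catch: eval() runs whatever it is given, so a malicious response
// executes as code instead of being rejected as malformed data.
```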

        • By nextaccountic 2025-05-25 4:13, 1 reply

          In other words, the thing that made JSON initially succeed was also a giant security hole

          • By motorest 2025-05-25 5:32

            > In other words, the thing that made JSON initially succeed was also a giant security hole

            Perhaps, but it's not a major concern when you control both the JavaScript frontend and whatever backend it consumes. In fact, arguably this technique is still pretty much in use today with the way WebApps get a hold of CSRF tokens. In this scenario security is a lesser concern than, say, input validation.

        • By jaapz 2025-05-24 21:36, 1 reply

          But also, all the other reasons written by the person you replied to

          • By motorest 2025-05-25 5:28

            > But also, all the other reasons written by the person you replied to

            Not really. JSON's mass adoption is tied to JavaScript's mass adoption, where sheer convenience and practicality dictate its whole history and most of its current state. Sending JavaScript fragments from the backend is a technique that didn't really stop being used just because someone rolled out a JSON parser.

            I think some people feel compelled to retroactively make this whole thing more refined and elegant because for some the ugly truth is hard to swallow.

        • By amne 2025-05-24 18:40

          it's in the name after all: [j]ava[s]cript [o]bject [n]otation

      • By pharrington 2025-05-24 0:12, 1 reply

        The entire reason ACME exists is because you are never writing or reading the CSR by hand.

        So of course, ACME is based around a format whose entire raison d'être is being written and read by hand.

        It's weird.

        • By thayne 2025-05-24 2:22

          The reason json is a good format for ACME isn't that it is easy to read and write by hand[1], but that most languages have at least one decent json implementation available, so it is easier to implement clients in many different languages.

          [1]: although being easy to read by humans is an advantage when debugging why something isn't working.

      • By eadmund 2025-05-23 21:17, 6 replies

        > I feel like not understanding why JSON won out is being intentionally obtuse.

        I didn’t feel like my comment was the right place to shill for an alternative, but rather to complain about JSON. But since you raise it.

        > JSON can easily be hand written, edited, and read for most data.

        So can canonical S-expressions!

        > Canonical S-expressions are not as easy to read and much harder to write by hand; having to prefix every atom with a length makes it very tedious to write by hand.

        Which is why the advanced representation exists. I contend that this:

            (urn:ietf:params:acme:error:malformed
             (detail "Some of the identifiers requested were rejected")
             (subproblems ((urn:ietf:params:acme:error:malformed
                            (detail "Invalid underscore in DNS name \"_example.org\"")
                            (identifier (dns _example.org)))
                           (urn:ietf:params:acme:error:rejectedIdentifier
                            (detail "This CA will not issue for \"example.net\"")
                            (identifier (dns example.net))))))
        
        is far easier to read than this (the first JSON in RFC 8555):

            {
                "type": "urn:ietf:params:acme:error:malformed",
                "detail": "Some of the identifiers requested were rejected",
                "subproblems": [
                    {
                        "type": "urn:ietf:params:acme:error:malformed",
                        "detail": "Invalid underscore in DNS name \"_example.org\"",
                        "identifier": {
                            "type": "dns",
                            "value": "_example.org"
                        }
                    },
                    {
                        "type": "urn:ietf:params:acme:error:rejectedIdentifier",
                        "detail": "This CA will not issue for \"example.net\"",
                        "identifier": {
                            "type": "dns",
                            "value": "example.net"
                        }
                    }
                ]
            }
        
        > for a canonical S-expression, you have to count how many characters you are typing/deleting, and then update the prefix.

        As you can see, no you do not.

        • By thayne 2025-05-24 2:18, 2 replies

          Your example uses s-expressions, not canonical s-expressions. Canonical S-expressions[1] are basically a binary format. Each atom/string is prefixed by a decimal length of the string and a colon. Its advantage over regular s-expressions is that there is no need to escape or quote strings with whitespace, and there is only a single possible representation for a given data structure. The disadvantage is that it is much harder for humans to read and write.

          As for s-expressions vs json, there are pros and cons to each. S-expressions don't have any way to encode type information in the data itself, you need a schema to know if a certain value should be treated as a number or a string. And it's subjective which is more readable.

          [1]: https://en.m.wikipedia.org/wiki/Canonical_S-expressions
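          A hypothetical encoder sketch of that wire form (every atom written as length, colon, bytes):

```javascript
// Hypothetical encoder for the canonical (wire) form: atoms carry a
// decimal byte-length prefix, so no quoting or escaping is ever needed,
// and each structure has exactly one byte-for-byte representation.
function canonical(expr) {
  if (typeof expr === 'string') return `${Buffer.byteLength(expr)}:${expr}`;
  return '(' + expr.map(canonical).join('') + ')';
}

console.log(canonical(['identifier', ['dns', '_example.org']]));
// (10:identifier(3:dns12:_example.org))
```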

          • By eadmund 2025-05-24 13:10, 1 reply

            > Your example uses s-expressions, not canonical s-expressions.

            I’ve always used ‘canonical S-expressions’ to refer to Rivest’s S-expressions proposal: https://www.ietf.org/archive/id/draft-rivest-sexp-13.html, a proposal which has canonical, basic transport & advanced transport representations which are all equivalent to one another (i.e., every advanced transport representation has a single canonical representation). I don’t know where I first saw it, but perhaps it was intended to distinguish from other S-expressions such as Lisp’s or Scheme’s?

            Maybe I should refer to them as ‘Rivest S-expressions’ or ‘SPKI S-expressions’ instead.

            > S-expressions don't have any way to encode type information in the data itself, you need a schema to know if a certain value should be treated as a number or a string.

            Neither does JSON, as this whole thread indicates. This applies to other data types, too: while a Rivest expression could be

                (date [iso8601]2025-05-24T12:37:21Z)
            
            JSON is stuck with:

                {
                  "date": "2025-05-24T12:37:21Z"
                }
            
            > And it's subjective which is more readable.

            I really disagree. The whole reason YAML exists is to make JSON more readable. Within limits, the more data one can have in a screenful of text, the better. JSON is so terribly verbose if pretty-printed that it takes up screens and screens of text to represent a small amount of data — and when not pretty-printed, it is close to trying to read a memory trace.

            Edit: updated link to the January 2025 proposal.

            • By antonvs 2025-05-24 14:21, 2 replies

              That Rivest draft defines canonical S-expressions to be the format in which every token is preceded by its length, so it's confusing to use "canonical" to describe the whole proposal, or use it as a synonym for the "advanced" S-expressions that the draft describes.

              But that perhaps hints at some reasons that formats like JSON tend to win popularity contests over formats like Rivest's. JSON is a single format for authoring and reading, which doesn't address transport at all. The name is short, pronounceable (vs. "spikky" perhaps?), and clearly refers to one thing - there's no ambiguity about whether you might be talking about a transport encoding instead.

              I'm not saying these are good reasons to adopt JSON over SPKI, just that there's a level of ambition in Rivest's proposal which is a poor match for how adoption tends to work in the real world.

              There are several mechanisms for JSON transport encoding - including plain old gzip, but also more specific formats like MessagePack. There isn't one single standard for it, but as it turns out that really isn't that important.

              Arguably there's a kind of violation of separation of concerns happening in a proposal that tries to define all these things at once: "a canonical form ... two transport representations, and ... an advanced format".

              • By wat10000 2025-05-24 16:15, 1 reply

                JSON also had the major advantage of having an enormous ecosystem from day 1. It was ugly and kind of insecure, but the fact that every JavaScript implementation could already parse and emit JSON out of the box was a huge boost. It’s hard to beat that even if you have the best format in the world.

                • By antonvs 2025-05-24 17:26, 1 reply

                  Haha yes, that does probably dwarf any other factors.

                  But still, I think if the original JSON spec had been longer and more comprehensive, along the lines of Rivest's, that could have limited JSON's popularity, or resulted in people just ignoring parts of it and focusing on the parts they found useful.

                  The original JSON RFC-4627 was about 1/3rd the size of the original Rivest draft (a body of 260 lines vs. 750); it defines a single representation instead of four; and e.g. the section on "Encoding" is just 3 sentences. Here it is, for reference: https://www.ietf.org/rfc/rfc4627.txt

                  • By wat10000 2025-05-24 17:48

                    We already see that a little bit. JSON in theory allows arbitrary decimal numbers, but in practice it’s almost always limited to numbers that are representable as an IEEE-754 double. It used to allow UTF-16 and UTF-32, but in practice only UTF-8 was widely accepted, and that eventually got reflected in the spec.

                    I’m sure you’re right. If even this simple spec exceeded what people would actually use as a real standard, surely anything beyond that would also be left by the wayside.

              • By kevin_thibedeau 2025-05-24 14:47, 1 reply

                > clearly refers to one thing

                Great, this looks like JSON. Is it JSON5? Does it expect bigint support? Can I use escape chars?

                • By antonvs 2025-05-24 17:11, 1 reply

                  You're providing an example of my point. People don't, in general, care about any of that, so "solving" those "problems" isn't likely to help adoption.

                  To your specific points:

                  1. JSON5 didn't exist when JSON adoption occurred, and in any case they're pretty easy to tell apart, because JSON requires keys to be quoted. This is a non-problem. Why do you think it might matter? Not to mention that the existence of some other format that resembles JSON is hardly a reflection on JSON itself, except perhaps as a compliment to its perceived usefulness.

                  2. Bigint support is not a requirement that most people have. It makes no difference to adoption.

                  3. Escape character handling is pretty well defined in ECMA 404. Your point is so obscure I don't even know specifically what you might be referring to.

                  • By thayne 2025-05-24 19:49

                    I agree with most of what you said, but json's numbers are problematic. For one thing, many languages have 64-bit integers, which can't be precisely represented as a double, so serializing such a value can lead to subtle bugs if it is deserialized by a parser that only supports doubles. And deserializing in languages that have multiple numeric types is complicated, since the parser often doesn't have enough context to know what the best numeric type to use is.
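                    A minimal illustration of that 64-bit hazard in Node:

```javascript
// 2^53 + 1 is a perfectly valid 64-bit integer, but no double holds it:
const parsed = JSON.parse('{"id": 9007199254740993}');
console.log(parsed.id); // 9007199254740992, off by one
console.log(BigInt(parsed.id) === 9007199254740993n); // false
```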

          • By dietr1ch 2025-05-24 4:40

            The length thing sounds like an editor problem, but we have wasted too much time in coming up with syntax that pleases personal preferences without admitting we would be better off moving away from text.

            xkcd 927 can be avoided, but it's way harder than it seems, which is why we have the proliferation of standards that fail to become universal.

        • By eximius 2025-05-23 21:35, 2 replies

          For you, perhaps. For me, the former is denser, but crossing into a "too dense" region. The JSON has indentation which is easy on my poor brain. Also, it's nice to differentiate between lists and objects.

          But, I mean, they're basically isomorphic with like 2 things exchanged ({} and [] instead of (); implicit vs explicit keys/types).

          • By josephg 2025-05-24 12:32

            Yeah. I don’t even blame S-expressions. I think I’ve just been exposed to so much json at this point that my visual system has its own crappy json parser for pretty-printed json.

            S expressions may well be better. But I don’t think S expressions are better enough to be able to overcome json’s inertia.

          • By em-bee 2025-05-29 12:35

            even as a fan of s-expressions (see my other comment), i have to agree. but the problem here is the formatting. for starters, i would write the s-expression example as:

                (urn:ietf:params:acme:error:malformed
                 (detail      "Some of the identifiers requested were rejected")
                 (subproblems ((urn:ietf:params:acme:error:malformed
                                (detail     "Invalid underscore in DNS name \"_example.org\"")
                                (identifier (dns _example.org)))
                               (urn:ietf:params:acme:error:rejectedIdentifier
                                (detail     "This CA will not issue for \"example.net\"")
                                (identifier (dns example.net))))))
            
            the alignment of the values makes them easier to pick out and gives a visual structure

            but, i would also argue that the two examples are not equivalent. what is explicitly specified as "type" and "value" in the json data, is implied in the s-expression data. either format is fine, but it would be better to compare like for like:

            an s-expression equivalent for the json example would look like this:

                ((type   urn:ietf:params:acme:error:malformed)
                 (detail "Some of the identifiers requested were rejected")
                 (subproblems 
                   ((type   urn:ietf:params:acme:error:malformed)
                    (detail "Invalid underscore in DNS name \"_example.org\"")
                    (identifier
                      (type  dns)
                      (value _example.org)))
                   ((type   urn:ietf:params:acme:error:rejectedIdentifier)
                    (detail "This CA will not issue for \"example.net\"")
                    (identifier
                      (type  dns)
                      (value example.net)))))
            
            or the reverse, a json equivalent for the s-expression example:

                {
                  "urn:ietf:params:acme:error:malformed":
                  {
                    "detail":     "Some of the identifiers requested were rejected",
                    "subproblems":
                    [
                      {
                        "urn:ietf:params:acme:error:malformed":
                        {
                          "detail":     "Invalid underscore in DNS name \"_example.org\"",
                          "identifier":
                          {
                            "dns": "_example.org"
                          }
                        }
                      },
                      {
                        "urn:ietf:params:acme:error:rejectedIdentifier":
                        {
                          "detail":     "This CA will not issue for \"example.net\"",
                          "identifier":
                          {
                            "dns": "example.net"
                          }
                        }
                      }
                    ]
                  }
                }
            
            a lot of the readability depends on the formatting. we could format the json example more dense:

                {"urn:ietf:params:acme:error:malformed": {
                  "detail":     "Some of the identifiers requested were rejected",
                  "subproblems": [
                    {"urn:ietf:params:acme:error:malformed": {
                      "detail":     "Invalid underscore in DNS name \"_example.org\"",
                      "identifier": {
                        "dns": "_example.org" }}},
                    {"urn:ietf:params:acme:error:rejectedIdentifier": {
                      "detail":     "This CA will not issue for \"example.net\"",
                      "identifier": {
                        "dns": "example.net" }}}]}}
            
            doing that shows that the main problem that makes json harder to read is the quotes around strings.

            because if we spread out the s-expression example:

                (urn:ietf:params:acme:error:malformed
                  (detail      "Some of the identifiers requested were rejected")
                  (subproblems
                    ((urn:ietf:params:acme:error:malformed
                       (detail     "Invalid underscore in DNS name \"_example.org\"")
                       (identifier 
                         (dns _example.org)
                       )
                     )
                     (urn:ietf:params:acme:error:rejectedIdentifier
                       (detail     "This CA will not issue for \"example.net\"")
                       (identifier
                         (dns example.net)
                       )
                     )
                    )
                  )
                )
            
            that doesn't add much to the readability. since, again, the primary win in readability comes from removing the quotes.

        • By eddythompson80 2025-05-24 1:05, 1 reply

          > is far easier to read than this (the first JSON in RFC 8555):

          It's not for me. I'd literally take anything over csexps. Like there is nothing that I'd prefer it to. If it's the only format around, then I'll just roll my own.

          • By justinclift 2025-05-24 15:05

            > Like there is nothing that I'd prefer it to.

            May I suggest perl regex's? :)

        • By NooneAtAll3 2025-05-24 17:24

          > I contend that this is far easier to read than this

          oh boi, that's some Lisp-like vs C-like level of holywar you just uncovered there

          and wooow my opinion is opposite of yours

        • By remram 2025-05-24 15:30

          This doesn't help with numbers at all, though. Any textual representation of numbers is going to have the same problem as JSON.

        • By michaelcampbell 2025-05-24 15:08

          > is far easier to read than this

          Readability is a function of the reader, not the medium.

      • By lisper 2025-05-24 18:15, 1 reply

        > Canonical S-expressions are not as easy to read and much harder to write by hand

        You don't do that, any more than you read or write machine code in binary. You read and write regular S-expressions (or assembly code) and you translate that into and out of canonical S expressions (or machine code) with a tool (an assembler/disassembler).

        • By cortesoft 2025-05-24 19:38, 1 reply

          I have written by hand and read JSON hundreds of times. You can tell me I shouldn’t, but I am telling you I do. Messing around with an API with curl, tweaking a request object slightly for testing something, etc.

          Reading happens even more times. I am constantly printing out API responses when I am coding, verifying what I am seeing matches what I am expecting, or trying to get an idea of the structure of something. Sure, you can tell me I shouldn’t do this and I should just read a spec, but in my experience it is often much faster just to read the JSON directly. Sometimes the spec is outdated, just plain wrong, or doesn’t exist. Being able to read the JSON is a regular part of my day.

          • By lisper 2025-05-24 19:56

            I think there may be a terminological disconnect here. S-expressions and canonical S-expressions are not the same thing. S-expressions (non-canonical) are comparable to JSON, intended to be read and written by humans, and actually much easier to read and write than JSON because they use less punctuation.

            https://en.wikipedia.org/wiki/S-expression

            A canonical S-expression is a binary format, intended to be both generated and parsed by machines, not humans:

            https://en.wikipedia.org/wiki/Canonical_S-expressions

      • By beeflet 2025-05-24 17:44

        you can use a program to convert between s-expressions and a more readable format. In a world where canonical s-expressions rule, this "more readable format" would probably be an ordinary s-expression

    • By tsimionescu 2025-05-24 6:14, 2 replies

      This seems like a just-so story. Your explanation could make some sense if we were comparing {"e" : "AQAB"} to {"e" : 65537}, but there is no reason why that should be the alternative. The JSON {"e" : "65537"} will be read precisely the same way by any JSON parser out there. Converting the string "65537" to the number 65537 is exactly as easy (or hard), but certainly unambiguous, as converting the string "AQAB" to the same number.

      Of course, if you're doing this in JS and have reasons to think the resulting number may be larger than the precision of a double, you have a huge problem either way. Just as you would if you were writing this in C and thought the number may be larger than what can fit in a long long. But that's true regardless of how you represent it in JSON.
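      A sketch of that equivalence in Node, where both routes land on the same integer:

```javascript
// Decoding the decimal string is as unambiguous as decoding the base64:
const fromDecimal = BigInt(JSON.parse('{"e": "65537"}').e);

// "AQAB" is base64 for the bytes 0x01 0x00 0x01:
const fromBase64 = Buffer.from('AQAB', 'base64')
  .reduce((acc, b) => acc * 256n + BigInt(b), 0n);

console.log(fromDecimal === fromBase64); // true
```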

      • By pornel 2025-05-24 11:29

        For very big numbers (that could appear in these fields), generating and parsing a base 10 decimal representation is way more cumbersome than using their binary representation.

        The DER encoding used in the TLS certificates uses the big endian binary format. OpenSSL API wants the big endian binary too.

        The format used by this protocol is a simple one.

        It's almost exactly the format that is needed to use these numbers, except JSON can't store binary data directly. Converting binary to base 64 is a simple operation (just bit twiddling, no division), and it's easier than converting arbitrarily large numbers between base 2 and base 10. The 17-bit value happens to be an easy one, but other values may need thousands of bits.

        It would be silly for the sender and recipient to need to use a BigNum library when the sender has the bytes and the recipient wants the bytes, and neither has use for a decimal number.
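        A sketch of that bytes-to-integer pass in Node (the helper name is made up):

```javascript
// Bytes to big integer is one linear pass of shifts: no division and
// no base-10 detour. This is the form an RSA implementation wants anyway.
function bytesToBigInt(buf) {
  let n = 0n;
  for (const b of buf) n = (n << 8n) | BigInt(b);
  return n;
}

console.log(bytesToBigInt(Buffer.from('AQAB', 'base64'))); // 65537n
```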

      • By deepsun 2025-05-24 7:15, 1 reply

        Some parsers, like PHP, may treat 65537 and "65537" the same. Room for vulnerability.

        • By int_19h 2025-05-24 7:48, 2 replies

          Why would they do so? It's semantically distinct JSON; even JS itself treats it differently.
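          For what it's worth, JS keeps the parsed types distinct, though loose comparison downstream can blur them again, which is presumably the kind of room-for-vulnerability the parent means:

```javascript
// JSON.parse itself keeps the two values distinct:
const a = JSON.parse('{"e": 65537}');
const b = JSON.parse('{"e": "65537"}');
console.log(typeof a.e, typeof b.e); // number string

// ...but loose comparison can erase the distinction again:
console.log(a.e == b.e);  // true
console.log(a.e === b.e); // false
```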

    • By ncruces 2025-05-23 18:56, 2 replies

      Go can decode numbers losslessly as strings: https://pkg.go.dev/encoding/json#Number

      json.Number is (almost) my “favorite” arbitrary decimal: https://github.com/ncruces/decimal?tab=readme-ov-file#decima...

      I'm half joking, but I'm not sure why S-expressions would be better here. There are LISPs that don't do arbitrary precision math.

      • By eadmund 2025-05-24 13:14

        > Go can decode numbers losslessly as strings: https://pkg.go.dev/encoding/json#Number

        Yup, and if you’re using JSON in Go you really do need to be using Number exclusively. Anything else will lead to pain.

        > I'm half joking, but I'm not sure why S-expressions would be better here. There are LISPs that don't do arbitrary precision math.

        Sure, but I’m referring specifically to https://www.ietf.org/archive/id/draft-rivest-sexp-13.html, which only has lists and bytes, and so numbers are always just strings and it’s up to the program to interpret them.

      • By mise_en_place 2025-05-23 21:11

        For actual SERDES, JSON becomes very brittle. It's better to use something like protobuf or cap'n'proto for such cases.

    • By marcosdumay 2025-05-23 18:42, 3 replies

      What I don't understand is why you (and a lot of other people) just expect S-expression parsers to not have the exact same problems.

      • By eadmund 2025-05-23 21:06, 3 replies

        Because canonical S-expressions don’t have numbers, just atoms (i.e., byte sequences) and lists. It is up to the using code to interpret "34" as the string "34" or the number 34 or the number 13,108 or the number 13,363, which is part of the protocol being used. In most instances, the byte sequence is probably a decimal number.

        Now, S-expressions as used for programming languages such as Lisp do have numbers, but again Lisp has bignums. As for parsers of Lisp S-expressions written in other languages: if they want to comply with the standard, they need to support bignums.
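        Those four readings of the atom "34", sketched in Node:

```javascript
// The two bytes 0x33 0x34 ("34") under the four interpretations above:
const atom = Buffer.from('34', 'ascii');
console.log(atom.toString());               // the string "34"
console.log(parseInt(atom.toString(), 10)); // the number 34
console.log(atom.readUInt16BE(0));          // 13108 (0x3334, big-endian)
console.log(atom.readUInt16LE(0));          // 13363 (0x3433, little-endian)
```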

        • By tsimionescu 2025-05-24 6:18

          You can write JSON that exclusively uses strings, so this is not really relevant. Sure, maybe it can be considered an advantage that s-expressions force you to do that, though it can also be seen just as easily as a disadvantage. It certainly hurts readability of the format, which is not a 0-cost thing. This is also why all Lisps use more than plain sexps to represent their code: having different syntax for different types helps.

        • By its-summertime 2025-05-23 21:56

          "it can do one of 4 things" sounds very much like the pre-existing issue with JSON

        • By motorest 2025-05-24 17:41

          > Because canonical S-expressions don’t have numbers, just atoms (i.e., byte sequences) and lists.

          If types other than string and a list bother you, why don't you stick with those types in JSON?

      • By 01HNNWZ0MV43FF 2025-05-23 19:07, 1 reply

        I think they mean that Common Lisp has bigints by default

        • By ryukafalz 2025-05-23 20:35, 1 reply

          As do Scheme and most other Lisps I'm familiar with, and integers/floats are typically specified to be distinct. I think we'd all be better off if that were true of JSON as well.

          I'd be happy to use s-expressions instead :) Though to GP's point, I suppose we might then end up with JS s-expression parsers that still treat ints and floats interchangeably.

          • By petre 2025-05-24 4:28

            And in addition to that they are unable to distinguish between a string "42" and a number 42.

    • By josephg 2025-05-24 12:26, 2 replies

      The funny thing about this is that JavaScript the language has had support for BigIntegers for many years at this point. You can just write 123n for a bigint of 123.

      JSON could easily be extended to support them - but there’s no standards body with the authority to make a change like that. So we’re probably stuck with json as-is forever. I really hope something better comes along that we can all agree on before I die of old age.

      While we’re at it, I’d also love a way to embed binary data in json. And a canonical way to represent dates. And comments. And I’d like a sane, consistent way to express sum types. And sets and maps (with non string keys) - which JavaScript also natively supports. Sigh.
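      The usual workaround today is an ad-hoc convention layered over the replacer/reviver hooks; a sketch (the "n"-suffix tag here is made up, which is exactly the problem - every producer and consumer must agree on it):

```javascript
// Tag bigints during stringify, revive them on parse:
const stringify = (obj) =>
  JSON.stringify(obj, (_, v) => (typeof v === 'bigint' ? v.toString() + 'n' : v));
const parse = (text) =>
  JSON.parse(text, (_, v) =>
    typeof v === 'string' && /^\d+n$/.test(v) ? BigInt(v.slice(0, -1)) : v);

const roundTripped = parse(stringify({ e: 65537n }));
console.log(roundTripped.e); // 65537n
```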

      • By aapoalas 2025-05-24 15:35

        It's more a problem of support and backwards compatibility. JSON and parsers for it are so ubiquitous, and the spec completely lacks any versioning support, that adding a feature would be a breaking change of horrible magnitude, on nearly all levels of the modern software infrastructure stack. I wouldn't be surprised if some CPUs might break from that :D

        JSON is a victim of its success: it has become too big to fail, and too big to improve.

      • By Sammi 2025-05-24 20:06, 1 reply

        There are easy workarounds to getting bigints in JSON: https://github.com/GoogleChromeLabs/jsbi/issues/30#issuecomm...

        • By josephg 2025-05-25 11:00, 1 reply

          Sure; and I can encode maps and sets as entry lists. Binary data as strings and so on. But I don’t want to. I shouldn’t have to.

          The fact remains that json doesn’t have native support for any of this stuff. I want something json-like which supports all this stuff natively. I don’t want to have to figure out if some binary data is base64 encoded or hex encoded or whatever, and hack around jackson or serde or javascript to encode and decode my objects properly. Features like this should be built in.

          • By Sammi 2025-05-25 11:24, 1 reply

            Agree. JSON definitely needs an update so we can get better ergonomics built in.

            In code you control you can choose to use JSON5: https://json5.org/

            • By josephg 2025-05-25 12:34

              Cool. Pity it still doesn’t support bigint, binary data, maps, sets, non-string keys or dates though.

    • By kangalioo 2025-05-23 19:01, 6 replies

      But what's wrong with sending the number as a string? `"65537"` instead of `"AQAB"`

      • By comex 2025-05-23 19:52, 1 reply

        The question is how best to send the modulus, which is a much larger integer. For the reasons below, I'd argue that base64 is better. And if you're sending the modulus in base64, you may as well use the same approach for the exponent sent along with it.

        For RSA-4096, the modulus is 4096 bits = 512 bytes in binary, which (for my test key) is 684 characters in base64 or 1233 characters in decimal. So the base64 version is much smaller.

        Base64 is also more efficient to deal with. An RSA implementation will typically work with the numbers in binary form, so for the base64 encoding you just need to convert the bytes, which is a simple O(n) transformation. Converting the number between binary and decimal, on the other hand, is O(n^2) if done naively, or O(some complicated expression bigger than n log n) if done optimally.

        Besides computational complexity, there's also implementation complexity. Base conversion is an algorithm that you normally don't have to implement as part of an RSA implementation. You might argue that it's not hard to find some library to do base conversion for you. Some programming languages even have built-in bigint types. But you typically want to avoid using general-purpose bigint implementations for cryptography. You want to stick to cryptographic libraries, which typically aim to make all operations constant-time to avoid timing side channels. Indeed, the apparent ease-of-use of decimal would arguably be a bad thing since it would encourage implementors to just use a standard bigint type to carry the values around.

        You could argue that the same concern applies to base64, but it should be relatively safe to use a naive implementation of base64, since it's going to be a straightforward linear scan over the bytes with less room for timing side channels (though not none).
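The size gap is easy to check in Python (with a made-up 4096-bit modulus, not a real key):

```python
import base64

# a made-up 4096-bit modulus for illustration (not a real key)
n = (1 << 4095) | 12345
raw = n.to_bytes(512, "big")        # 4096 bits = 512 bytes in binary

b64 = base64.urlsafe_b64encode(raw).rstrip(b"=")  # JWK-style unpadded base64url
dec = str(n)                                      # decimal string

print(len(raw), len(b64), len(dec))  # 512 bytes, 683 base64 chars, 1233 digits
```

The base64 form is about 55% of the decimal form, and the conversion is a linear pass over the bytes rather than repeated bignum division.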

        • By nssnsjsjsjs 2025-05-249:26

          Ah OK so: readable, efficient, consistent; pick 2.

      • By shiandow 2025-05-248:431 reply

        Converting large integers to decimal is nontrivial, especially when you don't trust languages to handle large numbers.

        Why you wouldn't just use the hexadecimal that everyone else seems to use I don't know. There seems to be a rather arbitrary cutoff where people prefer base64 to hexadecimal.

        • By chipsa 2025-05-2515:38

Size: base64 takes 2/3 as many characters as hex.

      • By red_admiral 2025-05-247:53

        This sounds like an XY problem to me. There is already an alternative that is at least as secure and only requires a single base-64 string: Ed25519.

      • By deepsun 2025-05-247:191 reply

        PHP (at least old versions I worked with) treats "65537" and 65537 similarly.

        • By red_admiral 2025-05-248:05

          That sounds horrible if you want to transmit a base64 string where the length is a multiple of 3 and for some cursed reason there's no letters or special characters involved. If "7777777777777777" is your encoded string because you're sending a string of periods encoded in BCD, you're going to have a fun time. Perhaps that's karma for doing something braindead in the first place though.

      • By foobiekr 2025-05-2320:48

        Cost.

      • By ayende 2025-05-2319:221 reply

        Too likely that this would not work because silent conversion to number along the way

        • By iforgotpassword 2025-05-2319:30

Then just prefixing it with an underscore or any random letter would've been fine, but of course base64-encoding the binary representation makes you look so much smarter.

    • By JackSlateur 2025-05-2320:533 reply

      Is this ok ?

  Python 3.13.3 (main, May 21 2025, 07:49:52) [GCC 14.2.0] on linux
  Type "help", "copyright", "credits" or "license" for more information.
  >>> import json
  >>> json.loads('47234762761726473624762746721647624764380000000000000000000000000000000000000000000')
  47234762761726473624762746721647624764380000000000000000000000000000000000000000000

      • By teddyh 2025-05-2321:41

        I prefer

          >> import json, decimal
          >> j = "47234762761726473624762746721647624764380000000000000000000000000000000000000000000"
          >> json.loads(j, parse_float=decimal.Decimal, parse_int=decimal.Decimal)
          Decimal('47234762761726473624762746721647624764380000000000000000000000000000000000000000000')
        
        This way you avoid this problem:

          >> import json
          >> j = "0.47234762761726473624762746721647624764380000000000000000000000000000000000000000000"
          >> json.loads(j)
          0.47234762761726473
        
        And instead can get:

          >> import json, decimal
          >> j = "0.47234762761726473624762746721647624764380000000000000000000000000000000000000000000"
          >> json.loads(j, parse_float=decimal.Decimal, parse_int=decimal.Decimal)
          Decimal('0.47234762761726473624762746721647624764380000000000000000000000000000000000000000000')

      • By sevensor 2025-05-2410:23

Just cross your fingers and hope for the best if your data is at any point decoded by a JSON library that doesn't support bigints? Python's ability to handle them is beside the point if they get mangled into IEEE 754 doubles along the way.
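The mangling is easy to reproduce: 2^53 + 1 is the first integer an IEEE 754 double cannot represent exactly, so any decoder that routes numbers through a double silently loses it (Python sketch):

```python
import json

n = 2**53 + 1                        # just past the exact-integer range of a double

# Python's json module keeps the integer exact...
assert json.loads(json.dumps(n)) == n

# ...but a decoder that parses numbers into IEEE 754 doubles loses the low bit:
mangled = int(float(n))
print(mangled == n)                  # False: it rounded to 2**53
```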

      • By jazzyjackson 2025-05-2321:021 reply

        yes, python falls into the sane language category with arbitrary-precision arithmetic

        • By faresahmed 2025-05-241:291 reply

          Not so much,

              >>> s="1"+"0"*4300
              >>> json.loads(s)
              ...
              ValueError: Exceeds the limit (4300 digits) for integer string conversion: 
              value has 4301 digits; use sys.set_int_max_str_digits() to increase the limit
          
          This was done to prevent DoS attacks 3 years ago and have been backported to at least CPython 3.9 as it was considered a CVE.

          Relevant discussion: https://news.ycombinator.com/item?id=32753235

          Your sibling comment suggests using decimal.Decimal which handles parsing >4300 digit numbers (by default).
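For completeness, the limit is adjustable at runtime; a sketch (the `hasattr` guard is for interpreters that predate the limit):

```python
import json
import sys

s = "1" + "0" * 4300   # 4301 digits: over CPython's default conversion limit

if hasattr(sys, "set_int_max_str_digits"):   # limit exists since 3.11 / backports
    sys.set_int_max_str_digits(5000)         # raise it explicitly

assert json.loads(s) == 10**4300
```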

          • By lifthrasiir 2025-05-244:37

            This should be interpreted as a stop-gap measure before a subquadratic algorithm can be adopted. Take a look at _pylong.py in new enough CPython.

    • By drob518 2025-05-2312:54

      Seems like a large integer can always be communicated as a vector of byte values in some specific endian order, which is easier to deal with than Base64 since a JSON parser will at least convert the byte value from text to binary for you.
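A quick sketch of that round trip in Python, using the common RSA exponent 65537 (0x010001) as the integer:

```python
import json

# round-trip a big integer as a JSON array of big-endian byte values
n = 65537
payload = json.dumps(list(n.to_bytes((n.bit_length() + 7) // 8, "big")))
# for 65537 (0x010001) the wire form is the string "[1, 0, 1]"
back = int.from_bytes(bytes(json.loads(payload)), "big")
assert back == n
```

Every element stays within 0..255, so no parser will mangle it, at the cost of a chattier wire format than base64.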

      But yea, as a Clojure guy sexprs or EDN would be much better.

    • By tempodox 2025-05-2318:36

      “Worse is better” is still having ravaging success.

    • By em-bee 2025-05-2316:26

      as someone who started the s-expression task on rosettacode.org, i approve. if you need an s-expression parser for your language, look here https://rosettacode.miraheze.org/wiki/S-expressions (the canonical url is https://rosettacode.org/wiki/S-expressions but they have DNS issues right now)

    • By zubspace 2025-05-2411:142 reply

      Wouldn't it just solve a whole lot of problems if we could just add optional type declarations to json? It seems so simple and obvious that I'm kinda dumbfounded that this is not a thing yet. Most of the time you would not need it, but it would prevent the parser from making a wrong guess in all those edge cases.

      Probably there are types not every parser/language can accept, but at least it could throw a meaningful error instead of guessing or even truncating the value.

      • By ivanbakel 2025-05-2411:32

        I doubt that would fix the issue. The real cause is that programmers mostly deal in fixed-size integers, and that’s how they think of integer values, since those are the concepts their languages provide. If you’re going to write a JSON library for your favourite programming language, you’re going to reach for whatever ints are the default, regardless of what the specs or type hints suggest.

        Haskell’s Aeson library is one of the few exceptions I’ve seen, since it only parses numbers to ‘Scientific’s (essentially a kind of bigint for rationals.) This makes the API very safe, but also incredibly annoying to use if you want to just munge some integers, since you’re forced to handle the error case of the unbounded values not fitting in your fixed-size integer values.

        Most programmers likely simply either don’t consider that case, or don’t want to have to deal with it, so bad JSON libraries are the default.

      • By movpasd 2025-05-2412:00

This is actually a deliberate design choice, which the breathtakingly short JSON standard explains quite well [0]. The designers deliberately didn't introduce any semantics, pushing all that to the implementors. I think this is a defensible design goal. If you introduce semantics, you're sure to annoy someone.

        There's an element of "worse is better" here [1]. JSON overtook XML exactly because it's so simple and solves for the social element of communication between disparate projects with wildly different philosophies, like UNIX byte-oriented I/O streams, or like the C calling conventions.

        ---

        [0] https://ecma-international.org/publications-and-standards/st...

        [1] https://en.wikipedia.org/wiki/Worse_is_better

    • By supermatt 2025-05-245:421 reply

      As you said - it’s not really a problem with the JSON structure and format itself, but the underlying parser, which is specifically designed to map to the initial js types. There are parsers that don’t have this problem, but then the JSON itself is not portable.

      The problem with your solution is that it’s also not portable for the same reason (it’s not part of the standard), and the reason that it wasn’t done that way in the first place is because it wouldn’t map to those initial js types!

      FYI, you can easily work around this by using replacer and revivers that are part of the standards for stringify and parse and treat numbers differently. But again, the json isn’t portable to places without those replacer/revivers.

I.e., the real problem is treating something that looks like JSON as JSON by using standards-compliant JSON parsers, not the apparent structure of the format itself. You could fix this problem in an instant by calling it something other than JSON, but people will see it and still use a JSON parser because it looks like JSON, not because it is JSON.

      • By zelphirkalt 2025-05-2412:59

        Isn't the actual problem that it is supposed to map to JS types, which are badly designed, and thus being infectious for other ecosystems, that don't have these defects?

    • By mnahkies 2025-05-247:09

      I'm still haunted by a bug caused by the JSON serializer our C# apps were using emitting bigints as JSON numbers, only for the JavaScript consumers to mangle them silently.

      Kinda blows my mind that the accepted behavior is to just overflow and not raise an exception.

      I try to stick to strings for anything that's not a 32 bit int now.

    • By TZubiri 2025-05-2322:121 reply

      It feels like malpractice to use json in encryption

      • By red_admiral 2025-05-248:09

        Sadly JWT and friends are "standard". In theory the representation and the data are independent and you can marshal and unmarshal correctly.

        In practice, "alg:none" is a headache and everyone involved should be ashamed.

    • By mindcrime 2025-05-2323:131 reply

        JSON is better than XML, but it really isn’t great. 
      
      JSON doesn't even support comments, c'mon. I mean, it's handy for some things, but I don't know if I'd say "JSON is better than XML" in any universal sense. I still go by the old saw "use the right tool for the job at hand". In some cases maybe it's JSON. In others XML. In others S-Exprs encoded in EBCDIC or something. Whatever works...

      • By deepsun 2025-05-2419:14

        Yup, imagine if HTML was JSON-like, not XML-like.

    • By fulafel 2025-05-248:54

      Is the correct number implementation really the exception? The first 2 json decoders I just tried (Python & Clojure) worked correctly with that example.

    • By rendaw 2025-05-249:20

      Canonical S-expressions don't have an object/mapping type, which means you can't have generic tooling unambiguously perform certain common operations like data merges.

    • By matja 2025-05-2312:552 reply

      Aren't JSON parsers technically not following the standard if they don't reliably store a number that is not representable by a IEEE754 double precision float?

It's a shame JSON parsers usually default to performance rather than correctness; correctness would mean using bignums for numbers.

      • By q3k 2025-05-2313:221 reply

        Have a read through RFC7159 or 8259 and despair.

        > This specification allows implementations to set limits on the range and precision of numbers accepted

        JSON is a terrible interoperability standard.

        • By matja 2025-05-2313:522 reply

          So a JSON parser that cannot store a 2 is technically compliant? :(

          • By reichstein 2025-05-2320:422 reply

            JSON is a text format. A parser must recognize the text `2` as a valid production of the JSON number grammar.

            Converting that text to _any_ kind of numerical value is outside the scope of the specification. (At least the JSON.org specification, the RFC tries to say more.)

As a textual format, when you use it for data interchange between different platforms, you should ensure that the endpoints agree on the _interpretation_, otherwise they won't see the same data.

            Again outside of the scope of the JSON specification.

            • By deepsun 2025-05-2419:39

              The more a format restricts, the more useful it is. E.g. if a format allows pretty much anything and it's up to parsers to accept or reject it, we may as well say "any text file" (or even "any data file") -- it would allow for anything.

              Similarly to a "schema-less" DBMS -- you will still have a schema, it will just be in your application code, not enforced by the DBMS.

              JSON is a nice balance between convenience and restrictions, but it's still a compromise.

            • By tsimionescu 2025-05-246:27

              A JSON parser has to check if a numeric value is actually numeric - the JSON {"a" : 123456789} is valid, but {"a" : 12345678f} is not. Per the RFC, a standards-compliant JSON parser can also refuse {"a": 123456789} if it considers the number is too large.
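A quick check with Python's parser, which enforces exactly that grammar rule:

```python
import json

# a compliant parser must accept the first document...
assert json.loads('{"a": 123456789}') == {"a": 123456789}

# ...and must reject the second: a trailing 'f' is not part of
# the JSON number grammar
try:
    json.loads('{"a": 12345678f}')
    raise AssertionError("should have been rejected")
except json.JSONDecodeError:
    pass
```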

          • By q3k 2025-05-2314:042 reply

            Yep. Or one that parses it into a 7 :)

            • By kevingadd 2025-05-2319:181 reply

              I once debugged a production issue that boiled down to "A PCI compliance .dll was messing with floating point flags, causing the number 4 to unserialize as 12"

              • By xeromal 2025-05-2320:46

                That sounds awful. lol

            • By chasd00 2025-05-2319:41

              > Or one that parses it into a 7 :)

              if it's known and acceptable that LLMs can hallucinate arguments to an API then i don't see how this isn't perfectly acceptable behavior either.

      • By kens 2025-05-2319:543 reply

        > Aren't JSON parsers technically not following the standard if they don't reliably store a number that is not representable by a IEEE754 double precision float?

        That sentence has four negations and I honestly can't figure out what it means.

        • By alterom 2025-05-2413:241 reply

          >> Aren't JSON parsers technically not following the standard if they don't reliably store a number that is not representable by a IEEE754 double precision float?

          >That sentence has four negations and I honestly can't figure out what it means.

This example is half as bad as the one Orwell gives in my favorite essay, "Politics and the English Language"¹.

          Compare and contrast:

          >I am not, indeed, sure whether it is not true to say that the Milton who once seemed not unlike a seventeenth-century Shelley had not become, out of an experience ever more bitter in each year, more alien (sic) to the founder of that Jesuit sect which nothing could induce him to tolerate.

          Orwell has much to say about either.

          _____

          ¹https://www.orwellfoundation.com/the-orwell-foundation/orwel...

          • By NooneAtAll3 2025-05-2417:441 reply

            that Orwell quote can be saved a lot by proper punctuation

            I am not, indeed, sure*,* whether it is not true to say that the Milton *(*who once seemed not unlike a seventeenth-century Shelley*)* had not become *-* out of an experience *-* ever more bitter in each year, more alien (sic) to the founder of that Jesuit sect*,* which nothing could induce him to tolerate.

            • By alterom 2025-06-027:27

              Has the proper punctuation allowed you to see that there's an extra negation there that makes the sentence say the exact opposite of what the author intended it to say?

        • By umanwizard 2025-05-2323:03

          “The standard technically requires that JSON parsers reliably store numbers, even those that are not representable by an IEEE double”.

          (It seems this claim is not true, but at least that’s what the sentence means.)

        • By NooneAtAll3 2025-05-2417:39

          Aren't {X}? -> isn't it true that {X}?

          {X} = JSON parsers technically [are] not following the standard if {reason}

          {reason} = [JSON parsers] don't reliably store a number that {what kind of number?}

          {what kind of number} = number that is not representable by a IEEE754 double precision float

          seems simple

    • By ownedthx 2025-05-2415:21

      The numerical issues here are due to JavaScript, not JSON.

    • By rr808 2025-05-249:471 reply

      > JSON is better than XML

      hard disagree on that one.

    • By llm_nerd 2025-05-2415:58

      >JSON is better than XML

JSON is still hack garbage compared to XML from the turn of the millennium. Like most dominant tech standards, JSON took hold purely because many developers are intellectually lazy and it was easier to slam some sloppy JSON together than to understand XML.

      XML with XSD, XPath and XQuery is simply a glorious combination.

  • By mcpherrinm 2025-05-2320:563 reply

    I’m the technical lead for the Let’s Encrypt SRE/infra team. So I spend a lot of time thinking about this.

    The salt here is deserved! JSON Web Signatures are a gnarly format, and the ACME API is pretty enthusiastic about being RESTful.

    It’s not what I’d design. I think a lot of that came via the IETF wanting to use other IETF standards, and a dash of design-by-committee.

    A few libraries (for JWS, JSON and HTTP) go a long way to making it more pleasant but those libraries themselves aren’t always that nice, especially in C.

    I’m working on an interactive client and accompanying documentation to help here too, because the RFC language is a bit dense and often refers to other documents too.

    • By cryptonector 2025-05-2321:202 reply

      > JSON Web Signatures are a gnarly format

      They are??

      As someone who wallows in ASN.1, Kerberos, and PKI, I don't find JWS so "gnarly". Even if you're open-coding a JSON Web Signature it will be easier than to open-code S/MIME, CMS, Kerberos, etc. Can you explain what is so gnarly about JWS?

      Mind you, there are problems with JWT. Mainly that HTTP user-agents don't know how to fetch the darned things because there is not standard for how to find out how to fetch the darned things, when you should honor a request for them, etc.

      • By mcpherrinm 2025-05-2417:171 reply

        I'd take ASN.1/DER over JWS any day :) It's the weekend and I don't feel I have the energy to launch a full roast of JWS, but to give some flavour, I'll link

        https://auth0.com/blog/critical-vulnerabilities-in-json-web-...

        Implementations can be written securely, but it's too easy to make mistakes.

        Yeah, there's worse stuff from the 90s around, but JOSE and ACME is newer than that - we could have done better!

        Alas, it's not changing now.

        I think ASN.1 has some warts, but I think a lot of the problems with DER are actually in creaky old tools. People seem way happier with Protobuf, for example: I think that's largely down to tooling.

        • By cryptonector 2025-05-2421:18

          The whole not validating the signatures thing is a problem, yes. That can happen with PKI certificates too, but those have been around longer and -perhaps because one needed an ASN.1 stack- only people with more experience wrote PKI stacks than we see in the case of JWS?

          I think Protocol Buffers is a disaster. Its syntax is worse than ASN.1 because you're required to write in tags, and it is a TLV encoding very similar to DER so... why _why_ does PB exist? Don't tell me it's because there were no ASN.1 tools around -- there were no PB tools around either!

      • By asimops 2025-05-2411:551 reply

        Don't you think you are falling for classic whataboutism here?

        Just because ASN.1 and friends are exceptionally bad, it does not mean that Json Web * cannot be bad also.

        • By cryptonector 2025-05-2416:13

          > Don't you think you are falling for classic whataboutism here?

          I do not. This sort of codec complexity can't be avoided. And ASN.1 is NOT "exceptionally bad" -- I rather like ASN.1. The point was not "wait till you see ASN.1", but "wait till you see Kerberos" because Kerberos requires a very large amount of client-side smarts -- too much really because it's got more than 30 years of cruft.

    • By dwedge 2025-05-2321:092 reply

      What is she talking about that you have to pay for certs if you want more than 3? Am I about to get a bill for the past 5 years or did she just misunderstand?

      • By belorn 2025-05-2321:261 reply

To quote the article (or rather, the 2023 article, which is the one mentioning the number 3):

        "Somehow, a couple of weeks ago, I found this other site which claimed to be better than LE and which used relatively simple HTTP requests without a bunch of funny data types."

        "This is when the fine print finally appeared. This service only lets you mint 90 day certificates on the free tier. Also, you can only do three of them. Then you're done. 270 days for one domain or 3 domains for 90 days, and then you're screwed. Isn't that great? "

She doesn't mention what this "other site" is.

        • By jchw 2025-05-2321:436 reply

          FWIW, it is ZeroSSL. I want there to be more major ACME providers than just LE, but I'm not sure about ZeroSSL, personally. It seems to have the same parent company as IdenTrust (HID Global Corporation). Probably a step up from Honest Achmed but recently I recall people complaining that their EV code signing certificates were not actually trusted by Windows which is... Interesting.

          • By killjoywashere 2025-05-242:55

            IdenTrust participates in the US Federal PKI ecosystem, so they likely have strong incentives to charge exorbitantly. Those free certs are probably meant to facilitate development of gov-specific capabilities by random subcontractors long enough to figure out how to structure a contract mod that passes the anticipated cost onto the government.

            Don’t hate the player, hate the game.

          • By AStonesThrow 2025-05-246:181 reply

            > Honest Achmed

            I had to stop and Google that, wondering if it was a pastiche of “Akbar & Jeff’s Certificate Hut”...

            https://bugzilla.mozilla.org/show_bug.cgi?id=647959

            • By jchw 2025-05-2417:26

              I'm glad to give you an xkcd 1053 moment. Honest Achmed is one for the books.

          • By arccy 2025-05-2413:081 reply

            Google's CA offers them for free via ACME https://pki.goog/

            • By jchw 2025-05-2416:49

              That's pretty cool, though it does seem that you need to authenticate with a GCP account. A little bit less convenient. I do think there are actually a few other providers of ACME out there that require registration beforehand, ZeroSSL actually offers it without pre-registration like Let's Encrypt.

          • By birktj 2025-05-2420:22

            Buypass provides ACME certificates as well [1]. The usage limits are not quite as generous as LE, but they work pretty well in my experience.

            [1] https://www.buypass.com/products/tls-ssl-certificates/read-m...

          • By rmetzler 2025-05-2415:351 reply

            A while ago I saw that acme.sh now uses ZeroSSL by default.

            https://github.com/acmesh-official/acme.sh/blob/42bbd1b44af4...

            • By _hyn3 2025-05-2417:391 reply

              "We now have another confirmation on Twitter that remote code is executed and a glimpse into what the script is... it appears to be benign."

              https://github.com/acmesh-official/acme.sh/issues/4659

              It was not. Don't use acme.sh.

              • By rsync 2025-05-2419:59

                I went down the acme/HiCA/RCE rabbit hole a year or so ago and, while I don't remember the specifics, my feeling was that the RCE was not that dangerous and was put into place by greedy scammers thwarting the rules of cert (re)selling and not by shadowy actors trying to infiltrate sensitive infra ...

                Is there new information ? Was my impression wrong ?

          • By nickf 2025-05-246:311 reply

            ZeroSSL is owned by Identrust, but the infra is operated by another CA. Also Microsoft killed EV codesigning early last year - not stopping it working, just making it identical to ‘normal’ codesigning certs.

            • By mkup 2025-05-2411:282 reply

              Could you please provide more info on this topic, e.g. a link? I intended to buy EV code signing certificate as a sole proprietor to fix long-standing problem with my software when Windows Defender pops up every time I release a new version. Is EV code signing certificate no longer a viable solution to this problem? Is there no longer a difference between EV and non-EV code signing certificate?

    • By tasuki 2025-05-2416:412 reply

      > and the ACME API is pretty enthusiastic about being RESTful

      Without looking at it, are you sure about that?

      I once used to know what REST meant. Are you doing REST as in HATEOAS or as in "we expose some http endpoints"?

      • By mcpherrinm 2025-05-2418:521 reply

        Everything is an object, identified by a URL. You start from a single URL (the directory), and you can find all the rest of the resources from URLs provided from there.

        ACME models everything as JSON objects, each of which is identified by URL. You can GET them, and they link to other objects with Location and Link headers.

        To quote from the blog post:

        > Dig around in the headers of the response, looking for one named "Location". Don't follow it like a redirection. Why would you ever follow a Location header in a HTTP header, right? Nope, that's your user account's identifier! Yes, you are a URL now.

I don't know if it's the pure ideal of HATEOAS, but it's about as close as I've seen in use.

        It has the classic failing though: it’s used by scripts which know exactly what they want to do (get a cert), so the clients still hardcode the actions they need. It just adds a layer of indirection as they need to keep track of URLs.

        I would have preferred if it was just an RPC-over-HTTP/JSON with fixed endpoints and numeric object IDs.
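As a sketch, the discovery step looks like this (illustrative URLs, not real endpoints):

```python
import json

# a trimmed example of an ACME directory response (made-up URLs):
directory = json.loads("""
{
  "newNonce":   "https://example.com/acme/new-nonce",
  "newAccount": "https://example.com/acme/new-account",
  "newOrder":   "https://example.com/acme/new-order",
  "revokeCert": "https://example.com/acme/revoke-cert"
}
""")

# a client hardcodes only the directory URL; every other resource is
# discovered by following these links
order_url = directory["newOrder"]
assert order_url == "https://example.com/acme/new-order"
```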

        • By tasuki 2025-05-2419:53

          That's pretty good! Better than 99% claims of REST for sure! Thanks for the long reply.

      • By peanut-walrus 2025-05-2418:171 reply

        REST has for a long long time meant "rpc via json over http". HATEOAS is a mythical beast nobody has ever seen in the wild.

        • By hamburglar 2025-05-2420:02

          Eh, I think that’s what it meant for a while. I’ve now interacted with enough systems that have rigor about representing things as resources that have GET urls and doing writes with POST etc that I don’t think it’s always the ad hoc RPC fest it once was. It may be rare to see according-to-hoyle HATEOAS but REST is definitely no longer in the “nobody actually does this” category.

  • By 1a527dd5 2025-05-2311:413 reply

    I don't understand the tone of aggression against ACME and their plethora of clients.

    I know it isn't a skill issue because of who the author is. So I can only imagine it is some sort of personal opinion that they dislike ACME as a concept or the tooling around ACME in general.

    We've been using LE for a while (since 2019 I think) for handful of sites, and the best nonsense client _for us_ was https://github.com/do-know/Crypt-LE/releases.

    Then this year we've done another piece of work this time against the Sectigo ACME server and le64 wasn't quite good enough.

    So we ended up trying:-

    - https://github.com/certbot/certbot on GitHub Actions, it was fine but didn't quite like the locked down environment

    - https://github.com/go-acme/lego huge binary, cli was interestingly designed and the maintainer was quite rude when raising an issue

    - https://github.com/rmbolger/Posh-ACME our favourite, but we ended up going with certbot on GHA once we fixed the weird issues around permissions

    Edit* Re-read it. The tone isn't aimed at the ACME or the clients. It's the spec itself. ACME idea good, ACME implementation bad.

    • By lucideer 2025-05-2312:164 reply

      > I don't understand the tone of aggression against ACME and their plethora of clients.

      > ACME idea good, ACME implementation bad.

      Maybe I'm misreading but it sounds like you're on a similar page to the author.

      As they said at the top of the article:

      > Many of the existing clients are also scary code, and I was not about to run any of them on my machines. They haven't earned the right to run with privileges for my private keys and/or ability to frob the web server (as root!) with their careless ways.

      This might seem harsh but when I think it's a pretty fair perspective to have when running security-sensitive processes.

      • By thayne 2025-05-243:161 reply

No, the author seems opposed to the specification of ACME itself, not just the implementation of the clients.

        And a lot of the complaints ultimately boil down to not liking JWS. And I'm not really sure what she would have preferred there. ASN.1, which is even more complicated? Some bespoke format where implementations can't make use of existing libraries?
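To show the shape being complained about: a minimal flattened JWS in Python, using HS256 with a made-up secret purely for illustration (real ACME requests sign with the account key, RS256/ES256, and carry a server-supplied nonce):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWS uses unpadded base64url throughout
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# protected header and payload are each base64url-encoded JSON
protected = b64url(json.dumps(
    {"alg": "HS256", "nonce": "made-up-nonce",
     "url": "https://example.com/acme/new-order"}
).encode())
payload = b64url(json.dumps(
    {"identifiers": [{"type": "dns", "value": "example.com"}]}
).encode())

# the signature covers "<protected>.<payload>"
signature = b64url(hmac.new(b"not-a-real-key",
                            f"{protected}.{payload}".encode(),
                            hashlib.sha256).digest())

jws = {"protected": protected, "payload": payload, "signature": signature}
```

Three layers of encoding (JSON inside base64url inside JSON) is the part people tend to find gnarly, even if each layer is individually simple.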

        • By imtringued 2025-05-249:20

          This is exactly the impression I got here.

          I would have had sympathy for the disdain for certbot, but certbot wasn't called out and that isn't what the blog post is about at all.

      • By dangus 2025-05-2321:541 reply

        I disagree, the author is overcomplicating and overthinking things.

        She doesn't "trust" tooling that basically the entire Internet including major security-conscious organizations are using, essentially letting perfect get in the way of good.

I think if she were a less capable engineer she would just set that shit up using the easiest way possible and forget about it like everyone else, and nothing bad would happen. Download nginx proxy manager, click click click, boom I have a wildcard cert, who cares?

        I mean, this is her https site, which seems to just be a blog? What type of risk is she mitigating here?

        Essentially the author is so skilled that she's letting perfect get in the way of good.

        I haven't thought about certificates for years because it's not worth my time. I don't really care about the tooling, it's not my problem, and it's never caused a security issue. Put your shit behind a load balancer and you don't even need to run any ACME software on your own server.

        • By nothrabannosir 2025-05-244:161 reply

          Sometimes I wonder how y’all became programmers. I learned basically everything by SRE-larping on my shitty nobody-cares-home-server for years and suddenly got paid to do it for real.

          Who do you think they hire to manage those LBs for you? People who never ran any ACME software, or people who have a blog post turning over every byte of JSON in the protocol in excruciating detail?

          • By dangus 2025-05-252:34

            Our backgrounds sound similar. I just don’t sweat all those details when I set things up.

            I’m not advocating for the use of cloud services necessarily, not saying we all need to allow someone else to abstract away everything. And I realize that someone on an ops team has to actually set that up at a low level at some point.

            What I am saying is that there’s a lot of open source software that has already invented the wheel for you. You can run it easily and be reasonably assured that it’s safe enough to be exposed to the internet.

            I gave the example of nginx proxy manager. It may be basic software but for a personal blog it’ll get the job done and you can set it up almost entirely in a GUI following a simple YouTube tutorial. It’ll get you an wildcard certificate automatically, and it’ll be secure enough.

      • By dwedge 2025-05-2321:121 reply

        This is the same author that threw everyone into a panic about atop and turned out to not really have found anything.

        • By ezekiel68 2025-05-2413:02

          Agreed and -- in particular -- I don't recall seeing any kind of "everybody get back into the pool" follow-up after the developers of atop quickly addressed the issue with an update. At least not any kind of follow-up that got the same kind of press as the initial alarm.

      • By giancarlostoro 2025-05-2312:335 reply

        I'm not a container guru by any means (at least not yet?) but would docker not suffice for these concerns?

        • By fpoling 2025-05-2312:583 reply

          The issue is that the client needs to access the private key, tell the web server where various temporary files are during certificate generation (unless the client uses DNS mode), and tell the web server to reload once there is a new certificate.

          To implement that, many clients run as root. Even if that root is in a docker container, this is needlessly elevated privilege, especially given the (again, needless) complexity of many clients.

          The sad part is that it is trivial to run most of the clients under an unprivileged account that can access only a few files, using a unix socket to tell the web server to reload the certificate. But this is not done.

          And then, ideally, web servers should, if not implement the ACME protocol themselves, at least facilitate ACME implementations, for example by redirecting ACME challenge requests to another port with a one-liner in the config. But this is not the case.
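
          That kind of redirect really is close to a one-liner. A hedged sketch for nginx (the responder port and path are illustrative):

          ```nginx
          # Forward ACME HTTP-01 challenges to an unprivileged responder on a
          # local port; the challenge process never touches the server's config
          # or keys.
          location /.well-known/acme-challenge/ {
              proxy_pass http://127.0.0.1:8402;
          }
          ```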

          • By ptx 2025-05-2318:36

            Apache comes with built-in ACME support. Just enable the mod_md module: https://httpd.apache.org/docs/2.4/mod/mod_md.html
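
            A minimal sketch per the mod_md docs (domain illustrative; mod_md and mod_ssl must be loaded):

            ```apache
            MDomain example.com www.example.com

            <VirtualHost *:443>
                ServerName example.com
                SSLEngine on
                # no SSLCertificateFile needed: mod_md supplies the managed cert
            </VirtualHost>
            ```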

          • By tialaramex 2025-05-2323:121 reply

            But the requirements you listed aren't actually requirements of ACME, they're lazy choices you could make but they aren't necessary. Some clients do better.

            For example the client needs a Certificate Signing Request, one way to achieve that is to either have the client choose the private keys or give it access to a chosen key, but the whole point of a CSR is that you don't need the private key, the CSR can be made by another system, including manually by a human and it can even be re-used repeatedly so that you don't need new ones until you decide to replace your keys.
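
            Sketched with openssl (filenames and domain are illustrative), the private key never has to leave the machine that generates it:

            ```shell
            # Create the private key and a reusable CSR somewhere that is NOT
            # the web server; only the CSR (which contains no secrets) is handed
            # to the ACME client, and it can be re-submitted for every renewal.
            openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out example.key
            openssl req -new -key example.key -subj "/CN=example.com" \
              -addext "subjectAltName=DNS:example.com" -out example.csr
            # Sanity-check the CSR's self-signature before handing it over
            openssl req -in example.csr -noout -verify
            ```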

            Yes, if we look back at my hopes when Let's Encrypt launched we can be very disappointed that although this effort was a huge success almost all the server vendors continued to ship garbage designed for a long past era where HTTPS is a niche technology they barely support.

            • By toast0 2025-05-245:151 reply

              I don't know that it's accurate, but at the beginning, it felt like using certbot was the only supported way to use ACME/LE, and it really wanted to do stuff as root and restart your webserver whenever.

              Or you could run Caddy which had a built in ACME client, but then you're running an extra daemon.

              apache_mod_md eventually came along which works for me, but it's also got some lazy things (it mostly just manages requesting certs, you've got to have a frequent enough reload to pick them up; I guess that's ok because I don't think public Apache ever learned to periodically check if it needs to reopen access logs when they're rotated, so you probably reload Apache from time to time anyway)
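
              For what it's worth, the "frequent enough reload" can be as dumb as a cron entry (illustrative paths):

              ```
              # /etc/cron.d/apache-reload: a nightly graceful reload picks up
              # any cert mod_md renewed and reopens rotated log files
              15 3 * * * root /usr/sbin/apachectl -k graceful
              ```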

              Before that was workable, I did need some certs and used acme.sh by hand, and it was nicer than trusting a big thing running in a cron and restarting things, but it was also inconvenient because I had to remember to go do it.

              • By tialaramex 2025-05-2412:42

                > I don't know that it's accurate, but at the beginning, it felt like using certbot was the only supported way to use ACME/LE, and it really wanted to do stuff as root and restart your webserver whenever.

                It's fair to say that on day one the only launch client was Certbot, although on that day it wasn't called "Certbot" yet, so if that's the name you remember it wasn't the only one. Given that success wasn't guaranteed (like the American Revolution or the Harry Potter books, it only seems obvious in hindsight) it's understandable that they didn't spend lots of money developing a variety of clients and libraries you might want.

          • By GoblinSlayer 2025-05-2316:43

            It's cheap. If the client was done today, it would be based on AI.

        • By rsync 2025-05-2314:08

          Yes, it does.

          I run acme in a non privileged jail whose file system I can access from outside the jail.

          So acme sees and accesses nothing and I can pluck results out with Unix primitives from the outside.

          Yes, I use dns mode. Yes, my dns server is also a (different) jail.

        • By TheNewsIsHere 2025-05-2312:56

          My reading of the article suggested to me that the author took exception to the code that touched the keying material. Docker is immaterial to that problem. I won’t presume to speak for Rachel By The Bay (mother didn’t raise a fool, after all), but I expect Docker would be met with a similar regard.

          Which I do understand. Although I use Docker, I mainly use it personally for things I don’t want to spend much time on. I don’t really like it over other alternatives, but it makes standing up a lab service stupidly easy.

        • By lucideer 2025-05-2316:461 reply

          I use docker for the same reasons as the author's reservations - I combine a docker exec with some of my own loose automation around moving & chmod-ing files & directories to obviate the need for the acme client to have unfettered root access to my system.

          Whether it's a local binary or a dockerised one, that access still needs to be marshalled either way & it can get complex facilitating that with a docker container. I haven't found it too bad but I'd really rather not need docker for on-demand automations.

          I give plenty* of services root access to my system, most of which I haven't written myself & I certainly haven't audited their code line-by-line, but I agree with the author that you do get a sense from experience of the overall hygiene of a project & an ACME client has yet to give me good vibes.

          * within reason

          • By paul_h 2025-05-247:331 reply

            Copilot suggests:

                docker run --rm \
                  -v /srv/mywebsite/certs:/acme.sh/certs \
                  -v /srv/mywebsite/public/.well-known/acme-challenge:/acme-challenge \
                  neilpang/acme.sh --issue \
                  --webroot /acme-challenge \
                  -d yourdomain.com \
                  --cert-file /acme.sh/certs/cert.pem \
                  --key-file /acme.sh/certs/key.pem \
                  --fullchain-file /acme.sh/certs/fullchain.pem
            
            I don't know why it's suggesting `neilpang` though, as he no longer has a fork.

            • By lucideer 2025-05-248:26

              Yeah I'm not running anything llms spit at me in a security-sensitive context.

              That example is not so bad - you've already pointed out the main obvious supply-chain attack vector in referencing a random ephemeral fork, but otherwise it's certonly (presumably neil's default) so it's the simplest case. Many clients have more... intrusive defaults that prioritise first-run cert onboarding, which opens up more surface area for error.

    • By diggan 2025-05-2312:14

      > I don't understand the tone of aggression against ACME and their plethora of clients.

      The older posts on the same website provided a bit more context for me to understand today's post better:

      - "Why I still have an old-school cert on my https site" - January 3, 2023 - https://rachelbythebay.com/w/2023/01/03/ssl/

      - "Another look at the steps for issuing a cert" - January 4, 2023 - https://rachelbythebay.com/w/2023/01/04/cert/

    • By immibis 2025-05-2311:475 reply

      Some people don't want to be forced to run a bunch of stuff they don't understand on the server, and I agree with them.

      Sadly, security is a cat and mouse game, which means it's always evolving and you're forced to keep up - and it's inherent by the nature of the field, so we can't really blame anyone (unlike, say, being forced to integrate with the latest Google services to be allowed on the Play Store). At least you get to write your own ACME client if you want to. You don't have to use certbot, and there's no TPM-like behaviour locking you out of your own stuff.

      • By g-b-r 2025-05-2312:211 reply

        > Some people don't want to be forced to run a bunch of stuff they don't understand on the server

        It's not just about not understanding, it's that more complex stuff is inherently more prone to security vulnerabilities, however well you think you reviewed its code.

        • By Avamander 2025-05-2312:231 reply

          > It's that more complex stuff is inherently more prone to security vulnerabilities

          That's overly simplifying it and ignores the part where the simple stuff is not secure to begin with.

          In the current context you could take a HTTP client with a formally verified TLS stack, would you really say it's inherently more vulnerable than a barebones HTTP client talking to a server over an unencrypted connection? I'd say there's a lot more exposed in that barebones client.

          • By g-b-r 2025-05-2314:291 reply

            The alternative of the article was ACME vs other ways of getting TLS certificates, not https vs http.

            Of course plain http would generally be much more dangerous than an encrypted connection, however complex.

            • By g-b-r 2025-05-272:32

              Why the heck was this downvoted

      • By tptacek 2025-05-2318:481 reply

        Non-ACME certs are basically over. The writing has been on the wall for a long time. I understand people being squeamish about it; we fear change. But I think it's a hopeful thing: the Web PKI is evolving. This is what that looks like: you can't evolve and retain everyone's prior workflows, and that has been a pathology across basically all Internet security standards work for decades.

        • By ipdashc 2025-05-2319:564 reply

          ACME is cool (compared to what came before it), but I'm kind of sad that EV certs never seemed to pan out at all. I feel like they're a neat concept, and had the potential to mitigate a lot of scams or phishing websites in an ideal world. (That said, discriminating between "big companies" and "everyone else who can't afford it" would definitely have some obvious downsides.) Does anyone know why they never took off?

          • By johannes1234321 2025-05-2321:20

            > Does anyone know why they never took off?

            Browser vendors at some point claimed it confused users and removed the highlight (I think the same browser vendors who try to remove the "confusing" URL bar ...)

            Aside from that, EV certificates are slow to issue, and phishers got similar enough EV certs, making the whole thing moot.

          • By amiga386 2025-05-2414:37

            Because they actively thwarted security.

            https://arstechnica.com/information-technology/2017/12/nope-...

            https://web.archive.org/web/20191220215533/https://stripe.ia...

            > this site uses an EV certificate for "Stripe, Inc", that was legitimately issued by Comodo. However, when you hear "Stripe, Inc", you are probably thinking of the payment processor incorporated in Delaware. Here, though, you are talking to the "Stripe, Inc" incorporated in Kentucky.

            There's a lot of validation that's difficult to get around in DV (Domain Validation) and in DNS generally. Unless you go to every legal jurisdiction in the world, open businesses, and file for and are granted trademarks, you _cannot_ guarantee that no other person will have the same visible EV identity as you.

            It's up to visitors to know that apple.com is where you buy Apple stuff, while apple.net, applecart.com, 4ppl3.com, аррlе.сом, and example.com/https://apple.com are not. But if they can manage that, they can trust apple.com more than they could any URL with an "Apple, Inc." EV certificate. Browsers that show the URL bar tend to highlight the top-level domain prominently, and they reject DNS names with mixed scripts, to avoid scammers fooling you. It's working better than EV.

          • By tialaramex 2025-05-2323:562 reply

            EV can't actually work. It was always about branding for the for-profit CAs so that they have a premium product which helps the line go up. Let me give you a brief history - you did ask.

            In about 2005, the browser vendors and the Certificate Authorities began meeting to see if they could reach some agreement as neither had what they wanted and both might benefit from changes. This is the creation of the CA/Browser Forum aka CA/B Forum which still exists today.

            From the dawn of SSL the CAs had been on a race to the bottom on quality and price.

            Initially maybe somebody from a huge global audit firm that owns a CA turn up on a plane and talks to your VP of New Technology about this exciting new "World Wide Web" product, maybe somebody signs a $1M deal over ten years, and they issue a certificate for "Huge Corporation, Inc" with all the HQ address details, etc. and oh yeah, "www.hugecorp.example" should be on there because of that whole web thing, whatever that's about. Nerd stuff!

            By 2005 your web designer clicks a web page owned by some bozo in a country you've never heard of, types in the company credit card details, the company gets charged $15 because it's "on sale" for a 3 year cert for www.mycompany.example, and mail.mycompany.example is thrown in for free, so that's nice. Is it secure? Maybe? I dunno, I think it checked my email address? Whatever. The "real world address" field in this certificate now says "Not verified / Not verified / None" which is weird, but maybe that's normal?

            The CAs can see that if this keeps up in another decade they'll be charging $1 each for 10 year certificates, they need a better product and the browser vendors can make that happen.

            On the other hand the browser vendors have noticed that whereas auditors arriving by aeroplane was a bit much, "Our software checked their email address matched in the From line" is kinda crap as an "assurance" of "identity".

            So, the CA/B Baseline Requirements aka BRs are one result. Every CA agreed they'd do at least what the "baseline" required, and in practice that's basically all they do, because it'd cost extra to do more, so why bother. The BRs started out pretty modest - but it's amazing what you find people were doing when you begin writing down the basics of what they obviously shouldn't do.

            For example, how about "No issuing certificates for names which don't exist" ? Sounds easy enough right? When "something.example" first comes into existence there shouldn't already be certificates for "something.example" because it didn't exist... right? Oops, lots of CAs had been issuing those certificates, reasoning that it's probably fine and hey, free money.

            Gradually the BRs got stricter, improving the quality of this baseline product in terms of both the technology and the business processes. This has been an enormous boon: because it's an agreement for the whole industry, it ratchets things up for everybody, so there's no race to the bottom on quality, since your competitors aren't allowed to do worse than the baseline. On price, the same can't be said; zero-cost certificates are what Let's Encrypt is most famous for, after all.

            The other side of the deal is what the CAs wanted, they wanted UI for their new premium product. That's EV. Unlike many of the baseline requirements, this is very product focused (although to avoid being an illegal cartel it is forbidden for CA/B Forum to discuss products, pricing etc.) and so it doesn't make much technical sense.

            The EV documents basically say you get all of the Baseline, plus we're going to check the name of your business - here's how we'll check - and then the web browser is going to display that name. It's implied that these extra checks cost more money (they do, and so this product is much more expensive). So improvements to that baseline do help still, but they also help everybody who didn't buy the premium EV product.

            Now, why doesn't this work in practice? The DNS name or IP address in an "ordinary" certificate can be compared to reality automatically by the web browser. This site says it is news.ycombinator.com, it has a certificate for news.ycombinator.com, that's the same OK. Your browser performs this check, automatically and seamlessly, for every single HTTP transaction. Here on HN that's per page load, but on many sites you're doing transactions as you click UI or scroll the page, each is checked.

            With EV the checks must be done by a human, is this site really "Bob's Burgers" ? Actually wait, is it really "Bob's Burgers of Ohio, US" ? Worse, probably although you know them as Bob's Burgers, legally, as you'd see on their papers of incorporation they are "Smith Restaurant Holdings Inc." and they're registered in Delaware because of course they are.

            So now you're staring at the formal company name of a business and trying to guess whether that's legitimate or a scam. But remember you can't just do this check once, scammers might find a way to redirect some traffic, so you need to check every individual transaction like your web browser does. Of course it's a tireless machine and you are not.

            So in practice this isn't effective.

            • By immibis 2025-05-2411:401 reply

              Still sounds better than nothing. And gives companies an incentive to register under their actual names.

              • By tialaramex 2025-05-2412:361 reply

                I'm not convinced on either, the mindless automation is always effective so you just don't need to think about it, whereas for EV you need to intimately understand exactly which transactions you verified and what that means - the login HTML was authentic but you didn't check the Javascript? The entire login page was checked but HTTP POST of your password was not? The redirect to payment.mybank.example wasn't checked? Only the images were checked?

                Imagine explaining to my mother how to properly check this, then imagine explaining why the check she just made is wrong now because the bank changed how their login procedure works.

                We could have attempted something with better security, although nowhere close to fool proof, but the CAs were focused on a profitable product not on improving security, and I do not expect anyone to have another bite of that cherry.

                As to the incentive to register, this is a cart-before-horse problem. Most businesses do not begin with a single unwavering vision of their eventual product and branding; they iterate, and that means the famous branding would need an expensive corporate change just to make the EV name line up properly. That's just not going to happen much of the time, so people get used to seeing the "wrong" name, and once that happens this is worthless.

                Meanwhile crooks can spend a few bucks to register a similar-sounding name and registration authorities don't care, while the machine sees at a glance the differences between bobs-burgers.example and robs-burgers.example and bobsburgers.example, the analogous business registrations look similar enough that humans would click right past.

                • By immibis 2025-05-2620:26

                  Why would you have to verify every transaction? If the page is from Big Bank Corp Of America, they're responsible for the whole page. Including any Javascript viruses they ill-advisedly include on their page.

            • By ipdashc 2025-05-2417:32

              This was a fun read. Thanks for the explanation!

          • By bandrami 2025-05-244:062 reply

            Phishers also got EV certs.

            The big problem with PKI is that there are known bad (or at least sketchy) actors on the big CA lists that realistically can't be taken off that list.

            • By solatic 2025-05-245:19

              How big of a problem is it really, with CAA records and FIDO2 or passkeys?

              CAA makes sure only one CA signs the cert for the real domain. FIDO2 prevents phishing on a similar-looking domain. EV would force a phisher to get a similar-looking corporate name, but that's beside the main FIDO2 protection.
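
              For reference, CAA is just a pair of DNS records (illustrative zone-file syntax, per RFC 8659):

              ```
              ; only Let's Encrypt may issue for this name, and no wildcards at all
              example.com.  IN  CAA  0 issue "letsencrypt.org"
              example.com.  IN  CAA  0 issuewild ";"
              ```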

            • By akerl_ 2025-05-244:301 reply

              What's an example?

              We're in an era where browsers have forced certificate transparency and removed major vendor CAs when they've issued certificates in violation of the browsers' requirements.

              The concern about bad/sketchy CAs in the list feels dated.

              • By bandrami 2025-05-2410:30

                Look at the list of state actors in your certificates bundle, for a start

      • By throw0101b 2025-05-2322:36

        > Some people don't want to be forced to run a bunch of stuff they don't understand on the server, and I agree with them.

        There are a number of shell-based ACME clients whose prerequisites are: OpenSSL and cURL. You're probably already relying on OpenSSL and cURL for a bunch of things already.

        If you can read shell code you can step through the logic and understand what they're doing. Some of them (e.g., acme.sh) often run as a service user (e.g., default install from FreeBSD ports) so the code runs unprivileged: just add a sudo (or doas) config to allow it to restart Apache/nginx.
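
        The sudo part can be a single line (illustrative; paths and service manager vary by OS):

        ```
        # /etc/sudoers.d/acme: the service user may reload the web server
        # and nothing else
        acme ALL=(root) NOPASSWD: /usr/sbin/service nginx reload
        ```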

      • By spockz 2025-05-2311:512 reply

        Given that keys probably need to be shared between multiple gateway/ingresses, how common is it to just use some HSM or another mechanism of exchanging the keys with all the instances? The acme client doesn’t have to run on the servers itself.

        • By tialaramex 2025-05-2312:29

          > The acme client doesn’t have to run on the servers itself.

          This is really important to understand if you care about either: Actually engineering security at some scale or knowing what's actually going on in order to model it properly in your head.

          If you just want to make a web site so you can put up a blog about your new kitten, any of the tools is fine, you don't care, click click click, done.

          For somebody like Rachel or many HN readers, knowing enough of the technology to understand that the ACME client needn't run on your web servers is crucial. It also means you know that when some particular client you're evaluating needs to run on the web server that it's a limitation of that client not of the protocol - birds can't all fly, but flying is totally one of the options for birds, we should try an eagle not an emu if we want flying.

        • By immibis 2025-05-2411:41

          You could if your domain was that valuable. Most aren't.

      • By hannob 2025-05-2311:585 reply

        > Some people don't want to be forced to run a bunch of stuff they don't understand on the server

        Honest question:

        * Do you understand OS syscalls in detail?

        * Do you understand how your BIOS initializes your hardware?

        * Do you understand how modern filesystems work?

        * Do you understand the finer details of HTTP or TCP?

        Because... I don't. But I know enough about them that I'm quite convinced each of them is a lot more difficult to understand than ACME. And all of them and a lot more stuff are required if you want to run a web server.

        • By sussmannbaka 2025-05-2312:08

          This point is so tired. I don’t understand how a thought forms in my neurons, eventually matures into a decision and how the wires in my head translate this into electrical pulses to my finger muscles to type this post so I guess I can’t have opinions about complexity.

        • By snowwrestler 2025-05-2318:45

          I get where you’re going with this, but in this particular case it might not be relevant because there’s a decent chance that Rachel By The Bay does actually understand all those things.

        • By frogsRnice 2025-05-2312:06

          Sure - but people are still free to decide where they draw the line.

          Each extra bit of software is an additional attack surface after all

        • By fc417fc802 2025-05-2313:55

          An OS is (at least generally) a prerequisite. If minimalism is your goal then you'd want to eliminate tangentially related things that aren't part of the underlying requirements.

          If you're a fan of left-pad I won't judge but don't expect me to partake without bitter complaints.

        • By kjs3 2025-05-2313:01

          I hear some variation of this line of 'reasoning' about once a week, and it's always followed by some variation of "...and that's why we shouldn't have to do all this security stuff you want us to do".

HackerNews