OxCaml - a set of extensions to the OCaml programming language.

2025-06-13 14:20 · oxcaml.org

It is both Jane Street’s production compiler and a laboratory for experiments aimed at making OCaml better for performance-oriented programming. Our hope is that these extensions can, over time, be contributed to upstream OCaml. The goal is:

  • to provide safe, convenient, predictable control over performance-critical aspects of program behavior
  • but only where you need it,
  • and…in OCaml!

OxCaml’s extensions are meant to make OCaml a great language for performance engineering. Performance engineering requires control, and we want that control to be:

  • Safe. Safety is a critical feature for making programmers more productive, and for shipping correct code. Languages that are pervasively unsafe are too hard to use correctly.
  • Convenient. We want to provide control without bewildering programmers, or drowning them in endless annotations. To achieve this, we aim to maintain OCaml’s excellent type-inference, even as we add considerable expressiveness to the type-system.
  • Predictable. One of the great features of OCaml today is that it’s pretty easy to look at OCaml code and understand how it’s going to perform. We want our extensions to maintain and improve on that property, by making key performance details explicit at the type-level.

By "only where you need it", we mean that OxCaml’s extensions should be pay-as-you-go. While OxCaml aims to provide more power to optimize, you shouldn’t need to swallow extra complexity when you’re not using that power.

By "in OCaml", we mean that all valid OCaml programs are also valid OxCaml programs. But our more profound goal is for OxCaml to feel like OCaml evolving into a better version of itself, rather than a new language. For that, OxCaml needs to honor OCaml’s basic design sensibility, and to preserve the safety, ease, and productivity that are hallmarks of the language.

Our extensions can be roughly organized into a few areas:

Writing correct concurrent programs is notoriously difficult. OxCaml includes additions to the type system to statically rule out data races.

OxCaml lets programmers specify the way their data is laid out in memory. It also provides native access to SIMD processor extensions.

OxCaml gives programmers tools to control allocations, reducing GC pressure and making programs more cache efficient and deterministic.
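
For a flavor of what this looks like, here is a minimal sketch based on Jane Street’s public write-ups of the local (stack-allocation) mode; the exact surface syntax may differ between OxCaml releases:

    (* The intermediate pair is marked local_, so it must not escape the
       function and can be placed on the stack instead of the minor heap.
       The ints pulled out of it are immediates, so they escape freely. *)
    let midpoint (x0, y0) (x1, y1) =
      let local_ (sx, sy) = (x0 + x1, y0 + y1) in
      (sx / 2, sy / 2)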

OxCaml also contains some extensions that aren’t specifically about systems programming, but which we’ve found helpful in our day-to-day work:

  • Polymorphic parameters
  • Include functor (see the sketch after this list)
  • Labeled tuples
  • Immutable arrays
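
As a small illustration of one of these, include functor lets a module apply a functor to itself as it is being defined. A sketch, assuming Jane Street’s Core library and its ppx derivers:

    open Core

    module Point = struct
      type t = { x : int; y : int } [@@deriving compare, sexp]

      (* Applies Comparable.Make to the module defined so far, adding
         comparison operators, Point.Map, Point.Set, etc., without the usual
         "module T = struct ... end; include Comparable.Make (T)" two-step. *)
      include functor Comparable.Make
    end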

OxCaml is open-source, and we’re excited to welcome experimental users, especially researchers and tinkerers who can kick the tires and provide feedback on the system. We put the emphasis on experimental because OxCaml makes no promises of stability or backwards compatibility for its extensions (though it does remain backwards compatible with OCaml).

OxCaml is intended to be easy to use, and to that end comes with modified versions of the standard OCaml tool-set, including:

  • Package management, compatible with dune and opam
  • Editor integration via the LSP-server
  • Source code formatting
  • Documentation generation

Jane Street has long open sourced a bunch of useful libraries and tools. These are now released in two forms: one for upstream OCaml, in which our extensions have been stripped, and one for OxCaml, where the extensions are fully leveraged.

Not all extensions are erasable, so some libraries will be available only for OxCaml. We’ll export OCaml-compatible versions of these libraries when the necessary extensions are integrated upstream.


Comments

  • By Lyngbakr 2025-06-13 14:47

    The Janet Street folks, who created this, also did an interesting episode[0] of their podcast where they discuss performance considerations when working with OCaml. What I was curious about was applying a GC language to a use case that must have extremely low latency. It seems like an important consideration, as a GC pause in the middle of high-frequency trading could be problematic.

    [0] https://signalsandthreads.com/performance-engineering-on-har...

    • By mustache_kimono 2025-06-14 19:09

      I actually asked Ron Minsky about exactly this question on Twitter[0]:

          Me: [W]hy not just use Rust for latency sensitive apps/where it may make sense?  Is JS using any Rust?
      
          Minsky: Rust is great, but we get a lot of value out of having the bulk of our code in a single language. We can share types, tools, libraries, idioms, and it makes it easier for folk to move from project to project.
      
          And we're well on our way to getting the most important advantages that Rust brings to the table in OCaml in a cleanly integrated, pay as you go way, which seems to us like a better outcome.
      
          There are also some things that we specifically don't love about Rust: the compile times are long, folk who know more about it than I do are pretty sad about how async/await works, the type discipline is quite complicated, etc.
      
          But mostly, it's about wanting to have one wider-spectrum language at our disposal.
      
      [0]: https://x.com/arr_ohh_bee/status/1672224986638032897

    • By pjmlp 2025-06-14 6:00

      The problem is not a GC language per se; people keep putting all GC languages in the same basket.

      The real issue is being a GC language, without support for explicit manipulation of stack and value types.

      Want a GC language, with the productivity of GC languages, plus the knobs to do low-level systems coding?

      Cedar, Oberon language family, Modula-3, D, Nim, Eiffel, C#, F#, Swift, Go.

      • By jaennaet 2025-06-14 7:24

        Does Go have much in the way of GC knobs? It didn't some years ago, but I haven't kept up on latest developments

        • By pjmlp 2025-06-14 8:02

          The knobs aren't necessarily on the GC; rather, they're language features.

          With the Go compiler toolchain you get static allocation on the stack and in global memory, compiler flags to track down when references escape, the option to allocate manually via OS bindings and wrap that memory in slices through the unsafe package, and an assembler as part of the toolchain if you learn to use it. And regardless of the "CGO is not Go" memes, cgo is another tool to reach for if assembly isn't your thing.

          • By jaennaet 2025-06-14 8:08

            Ohh right yes, now I get what you mean. My brain just immediately went for "GC knobs" when you mentioned "knobs", but in my defense I'm running a 40°C fever so I should probably not be commenting at all

            • By pjmlp 2025-06-14 8:40

              All the best and get well.

    • By AdieuToLogic 2025-06-14 2:47

      > What I was curious about was applying a GC language to a use case that must have extremely low latency. It seems like an important consideration, as a GC pause in the middle of high-frequency trading could be problematic.

      Regarding a run-time environment using garbage collection in general, not OCaml specifically, GC pauses can be minimized with parallel collection algorithms such as found in the JVM[0]. They do not provide hard guarantees however, so over-provisioning system RAM may also be needed in order to achieve required system performance.

      Another more complex approach is to over-provision the servers such that each can drop out of the available pool for a short time, thus allowing "offline GC." This involves collaboration between request routers and other servers, so may not be worth the effort if a deployment can financially support over-provisioning servers such that there is always an idle CPU available for parallel GC on each.

      0 - https://docs.oracle.com/en/java/javase/17/gctuning/parallel-...

      • By pjmlp 2025-06-14 6:02

        Java is like C and C++: there isn't just one implementation.

        So if you want hard guarantees, you reach out to real time JVM implementations like the commercial ones from PTC and Aicas.

    • By rauljara 2025-06-13 16:07

      GC compactions were indeed a problem for a number of systems. The trading systems in general had a policy of not allocating after startup. JS has a library called "Zero" that provides a host of non-allocating ways of doing things.

      • By jitl 2025-06-13 16:51

        Couldn’t find this after 6 seconds of googling, link?

        • By jallmann 2025-06-13 16:57

          The linked podcast episode mentions it.

          • By notnullorvoid 2025-06-13 18:37

            There's no mention of a library called zero, or even JavaScript.

            • By garbthetill 2025-06-13 19:00

              I'm assuming the JS refers to Jane Street

              • By notnullorvoid 2025-06-13 21:19

                That makes sense, I guess I've got web tunnel vision.

                • By sheepscreek 2025-06-14 1:41

                  I was bitten by the same spider that gave you web tunnel vision. In any case, I find OCaml too esoteric for my taste. F# is softer and feels more... modern, perhaps? But I don’t think GC can be avoided in dotnet.

                  • By debugnik 2025-06-14 6:57

                    You can avoid GC in hot loops in F# with value-types, explicit inlining, and mutability.

                    Mutability may not result in very idiomatic code however, although it can often be wrapped with a functional API (e.g. parser combinators).

            • By jallmann 2025-06-13 19:49

              > This is what I like to call a dialect of OCaml. We speak in sometimes and sometimes we gently say it’s zero alloc OCaml. And the most notable thing about it, it tries to avoid touching the garbage collector ...

    • By enricozb 2025-06-13 15:54

      Haven't looked at the link, but I think for a scenario like trading where there are market open and close times, you can just disable the GC, and restart the program after market close.
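
      OCaml's GC can't be switched off outright, but the standard Gc control parameters can defer most collection work past the trading window. A rough sketch in plain OCaml (the parameter values are illustrative, not taken from the comment):

          (* At startup: a huge minor heap and a very lazy major GC mean almost
             no collection work during trading hours. *)
          let () =
            Gc.set { (Gc.get ()) with
                     Gc.minor_heap_size = 256 * 1024 * 1024;  (* in words *)
                     space_overhead = 10_000;     (* major slices become rare *)
                     max_overhead = 1_000_000 }   (* never compact automatically *)

          (* After the close: pay for it all at once. *)
          let end_of_day () = Gc.compact ()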

    • By great_wubwub 2025-06-13 19:58

      *Jane Street

      • By esafak 2025-06-14 0:07

        It's a great name for a competitor :)

    • By mardifoufs 2025-06-13 16:34

      You just let the garbage accumulate and collect it whenever markets are closed. In most cases, whenever you need ultra low latency in trading, you usually have very well defined time constraints (market open/close).

      Maybe it's different for markets that are always open (crypto?) but most HFT happens during regular market hours.

      • By dmkolobov 2025-06-13 16:45

        Is that really a viable solution for a timeframe of 6+ hours?

        • By jitl 2025-06-13 16:52

          Sure, if you know how much you allocate per minute (and don’t exceed your budget) you just buy enough RAM and it’s fine.
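
          As a rough back-of-the-envelope (the numbers are assumed for illustration, not from the thread): at 100 MB/min of allocation, a 6.5-hour session is 390 minutes, so

              100 MB/min × 390 min ≈ 39 GB of extra headroom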

          • By ackfoobar 2025-06-13 17:49

            This will decrease performance because of reduced locality. Maybe increased jitter because of TLB misses.

            • By jitl 2025-06-14 1:31

              Compared to what, running a garbage collector?

              • By dmkolobov 2025-06-14 6:04

                Probably? Locality becomes fairly important at scale. That’s why there’s a strong preference for array-based data structures in high-performance code.

                If I was them I’d be using OCaml to build up functional “kernels” which could be run in a way that requires zero allocation. Then you dispatch requests to these kernels and let the fast modern generational GC clean up the minor cost of dispatching: most of the work happens in the zero-allocation kernels.

          • By mayoff 2025-06-13 17:22

            (this comment was off topic, sorry)

        • By spooneybarger 2025-06-13 18:03

          Yes. It is a very common design pattern within banks for systems that only need to run during market hours.

          • By iainctduncan 2025-06-13 18:54

            I talk about doing this in an audio context and get met with raised eyebrows; I'd love some references on others doing it, if anyone can share!

        • By mardifoufs 2025-06-13 23:57

          I think it is, but to be clear I think (from my very limited experience, just a couple of years before leaving finance, and from the people with more experience that I've talked with) that C++ is still a lot more common than any GC language (typically Java, since OCaml is even rarer). So it is possible, and some firms seem to take that approach, but I'm not sure exactly how, besides turning off the GC or very specific GC tuning.

          Here is a JVM project I saw a few years back, I'm not sure how successful the creators are but they seem to use it in actual production. It's super rare to get even a glimpse at HFT infra from the outside so it's still useful.

          https://github.com/OpenHFT

        • By logicchains 2025-06-13 16:52

          You can just add more RAM until it is viable.

      • By amw-zero 2025-06-14 2:27

        Are you aware of how many allocations the average program executes in the span of a couple of minutes? Where do you propose all of that memory lives in a way that doesn’t prevent the application from running?

  • By legobmw99 2025-06-13 14:37

    The first feature that originated in this fork to be upstreamed is labeled tuples, which will be in OCaml 5.4:

    https://github.com/ocaml/ocaml/pull/13498

    https://discuss.ocaml.org/t/first-alpha-release-of-ocaml-5-4...

    • By aseipp 2025-06-13 15:58

      Yeah, pretty excited about this one even though it seems minor. A paper and talk by the author of this particular feature from ML2024, too:

      - https://www.youtube.com/watch?v=WM7ZVne8eQE

      - https://tyconmismatch.com/papers/ml2024_labeled_tuples.pdf

    • By munchler 2025-06-13 16:47

      > Because sum:int * product:int is a different type from product:int * sum:int, the use of a labeled tuple in this example prevents us from accidentally returning the pair in the wrong order, or mixing up the order of the initial values.

      Hmm, I think I like F#'s anonymous records better than this. For example, {| product = 6; sum = 5 |}. The order of the fields doesn't matter, since the value is not a tuple.

      • By thedufer 2025-06-14 14:54

        Labeled tuples are effectively order-independent. Your implementation's order has to match your interface's order, but callers can destruct the labeled tuples in any order and the compiler will do the necessary reordering (just like it does for destructing records, or calling functions with labeled arguments). I don't think this is materially different from what you're describing in F#, except that labeled tuples don't allow labeling a single value (that is, there's no 1-tuple, which is also the case for normal tuples).
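
        For instance, a small sketch using the labeled-tuple syntax from the upstream PR (punning and reordering in the destructuring pattern are the point):

            (* val sum_and_product : int -> int -> sum:int * product:int *)
            let sum_and_product x y = (~sum:(x + y), ~product:(x * y))

            (* The caller can bind the components in either order. *)
            let () =
              let (~product, ~sum) = sum_and_product 2 3 in
              Printf.printf "sum=%d product=%d\n" sum product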

      • By reycharles 2025-06-14 4:20

        One reason they're not the same is that the memory representation is different (sort of). This will break FFIs if you allow reordering the tuple arbitrarily.

      • By rwmj 2025-06-13 17:18

        Isn't that just the same as the ordinary OCaml { product = 6; sum = 5 } (with a very slightly different syntax)?

        • By munchler 2025-06-13 17:24

          The difference between { … } and {| … |} is that the latter’s type is anonymous, so it doesn’t have to be declared ahead of time.

          • By rwmj 2025-06-13 18:12

            Oh I see, good point. I'm wondering how this is represented internally. Fields alphabetically? I've also desired extensible anonymous structs (with defaults) from time to time, but implementing that would involve some kind of global analysis I suppose.

    • By debugnik 2025-06-13 14:56

      Immutable arrays were ported from this fork as well, and merged for 5.4, although with different syntax I think.

    • By andrepd 2025-06-13 15:04

      Anonymous labeled structs and enums are some of my top wished-for features in programming languages! For instance, in Rust you can define labelled and unlabelled (i.e. tuple) structs

          struct Foo(i32, i32);
          struct Bar{sum: i32, product: i32}
      
      But you can, for example, only return an anonymous tuple from a function, not an anonymous labelled struct:

          fn can() -> (i32, i32)
          fn cant() -> {sum: i32, product: i32}

      • By munificent 2025-06-13 17:19

        In Dart, we merged tuples and records into a single construct. A record can have positional fields, named fields, or both. A record type can appear anywhere a type annotation is allowed. So in Dart these are both fine:

            (int, int) can() => (1, 2);
            ({int sum, int product}) alsoCan() => (sum: 1, product: 2);
            (int, {int remainder}) evenThis() => (1, remainder: 2);
        
        The curly braces in the record type annotation distinguish the named fields from the positional ones. I don't love the syntax, but it's consistent with function parameter lists where the curly braces delimit the named parameters.

        https://dart.dev/language/records

        • By afiori 2025-06-13 18:58

          How do you distinguish a tuple with both positional and named fields from a tuple that has a record as a field?

          Like how do you write the type of (1, {sum: 2})? Is it different from (1, sum: 2)?

          • By munificent 2025-06-13 21:02

            The syntax is a little funny for mostly historical reasons. The curly braces are only part of the record type syntax. There's no ambiguity there because curly braces aren't used for anything else in type annotations (well, except for named parameters inside a function type's parameter list, but that's a different part of the grammar).

      • By tialaramex 2025-06-13 15:26

        Hmm. Let me first check that I've understood what you care about

            struct First(this: i8, that: i64)
            struct Second(this: i8, that: i8)
            struct Third(that: i64, this: i8)
            struct Fourth(this: i8, that: i64)
            struct Fifth(some: i8, other: i64)
        
        You want First and Fourth as the same type, but Second and Third are different - how about Fifth?

        I see that this is different from Rust's existing product types, in which First and Fourth are always different types.

        Second though, can you give me some examples where I'd want this? I can't say I have ever wished I had this, but that might be a different experience.

        • By cAtte_ 2025-06-13 15:58

          they're not asking for a structural typing overhaul, just a way to make ad-hoc anonymous types with named fields and pass them around. a lot of times with tuple return types you're left wondering what that random `usize` is supposed to represent, so having names for it would be very convenient. i don't see why, under the hood, it couldn't just be implemented the exact same way as current tuple return types

          • By zozbot234 2025-06-13 16:20

            > they're not asking for a structural typing overhaul, just a way to make ad-hoc anonymous types with named fields and pass them around.

            And their point is that the two boil down to the same thing, especially in a non-trivial program. If switching field positions around changes their semantics, tuples may well be the most sensible choice. As for "what that random usize is supposed to represent", that's something that can be addressed with in-code documentation, which Rust has great support for.

            • By tialaramex 2025-06-13 17:12

              Also, if it's not just a "random usize" then you should use the new type paradigm. In a language like Rust that's not quite as smooth as it could possibly be, but it's transparent to the machine code. Rust's Option<OwnedFd> is the same machine code as C's int file descriptor, but the same ergonomics as a fancy Haskell type. We can't accidentally confuse "None, there isn't a file descriptor" for an actual file descriptor as we so easily could in C, nor can we mistakenly do arithmetic with file descriptors - which is nonsense but would work (hilarity ensues) in C.

              If these aren't "random" usizes but FileSizes or ColumnNumbers or SocketTimeouts then say so and the confusion is eliminated.

      • By int_19h 2025-06-13 19:52

        It's interesting that languages which start with purely nominal structs tend to acquire some form of structurally typed records in the long run. E.g. C# has always had (nominally typed) structs, then .NET added (structurally typed) tuples, and then eventually the language added (still structurally typed) tuples with named items on top of that.

      • By munk-a 2025-06-13 20:26

        PHP has it all!

        I think the main dividing line here is whether you want to lean into strict typing or whether you prefer a looser typing structure. The extremes of both (where, for instance, the length of an array is part of its type definition, or there are no contractual guarantees about data) are awful. I think the level of type strictness you desire for a product is probably best dictated by team and project size (which, you'll note, changes over the lifetime of the product), with a lack of typing making it much easier to prototype early code, while extremely strict typing can serve as a strong code contract in a large codebase where no one person can comprehend the entirety of it.

        It's a constant push and pull of conflicting motivations.

  • By debugnik 2025-06-13 15:30

    I wasn't aware that this fork supported SIMD! Between this, unboxed types and the local mode with explicit stack allocation, OxCaml almost entirely replaces my past interest in F#; this could actually become usable for gamedev and similar consumer scenarios if it also supported Windows.

    • By TheNumbat 2025-06-13 15:44

      Yeah, this would be great! Currently only 128-bit SSE/NEON is working but AVX is coming very soon. There's also nothing blocking Windows, but it will require some work. (I added the SIMD support in OxCaml)

      • By aguluman 2025-06-14 21:43

        This is so cool.

      • By aseipp 2025-06-13 16:00

        FWIW, the "Get OxCaml" page actually says that SIMD on ARM isn't supported yet. If it actually works it would be worth removing that from the known issues list https://oxcaml.org/get-oxcaml/

      • By debugnik 2025-06-13 16:08

        Cool to hear there aren't any technical blockers to add Windows support! You just convinced me into giving OxCaml a try for a hobby project. 128-bit SSE is likely to be enough for my use case and target specs.

        • By avsm 2025-06-13 17:43

          David Allsopp had an oxcaml branch compiling on Windows a few months ago, so it’s in the queue…

HackerNews