...

FlyingLawnmower

Karma: 465

Created: 2014-02-15

Recent Activity

  • Good point re: documentation...

    We have support for Hugging Face Transformers, llama.cpp, vLLM, SGLang, and TensorRT-LLM, along with some smaller providers (e.g. mistral.rs). Using any of these libraries as an inference host means you can use an OSS model with the guidance backend for full support (see the sketch at the end of this comment). Most open source models will run on at least one of these backends, with vLLM probably being the most popular hosted solution, and transformers/llama.cpp being the most popular local model solutions.

    We're also the backend used by OpenAI/Azure OpenAI for structured outputs on the closed source model side.
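    To make the backend point concrete, here is a minimal sketch of driving a local Hugging Face model through guidance, based on the patterns in the guidance README (the model name is just an example, and exact signatures may vary by version):

      from guidance import models, gen

      # Load any Transformers-compatible OSS model as the inference host.
      # (Example model; swap in whatever you run locally.)
      lm = models.Transformers("microsoft/Phi-3-mini-4k-instruct")

      # Constrained generation: the grammar engine masks tokens so the
      # completion must match the regex.
      lm += "The answer is: " + gen(regex=r"[0-9]+", name="answer")
      print(lm["answer"])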

  • guidance can handle many context-free grammars. We use an Earley parser under the hood (https://en.wikipedia.org/wiki/Earley_parser), which gives us significantly more flexibility than alternative approaches that use weaker parsers (we went through a lot of effort to make Earley parsing fast enough not to slow down LM inference). XML, however, is not perfectly context-free, since an arbitrary open tag must match its own close tag, though with some basic assumptions (e.g. a fixed tag set) you can make it CF.

    The annoying bit with grammars is that they are unfortunately a bit complex to write properly. Fortunately, language models are getting better at this, so to get an XML grammar you can hopefully get most of the way there with just a GPT-5 prompt. I suppose it would be a good idea to have a pre-built set of popular grammars (like a modified XML) in guidance so that we cut this headache out for users...! (A rough sketch of what such a grammar looks like follows below.)
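    As an illustration of the "fixed tag set makes it CF" point, here is a toy sketch in guidance's grammar-composition style, adapted from the CFG examples in the guidance README (the tag names and function names are invented for illustration):

      import guidance
      from guidance import gen, select

      # With the tag vocabulary fixed up front, each element becomes an
      # ordinary context-free production: open tag, text, matching close tag.
      @guidance(stateless=True)
      def element(lm, tag):
          return lm + f"<{tag}>" + gen(regex=r"[^<]*") + f"</{tag}>"

      @guidance(stateless=True)
      def fragment(lm):
          # A fragment is one element drawn from the known tag set.
          return lm + select([element("title"), element("body")])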

  • We did quite a thorough benchmark of various structured decoding providers in one of our papers (https://arxiv.org/abs/2501.10868v3), measuring them on performance, constraint flexibility, downstream task accuracy, etc.

    Happy to chat more about the benchmark. Note that these results are a bit out of date, though; I'm sure many of the providers we tested have made improvements (and some have switched wholesale to using llguidance as a backend).

  • Thanks :)

    Great question re: adoption... it's definitely dominated by JSON. Most API providers have standardized on JSON outputs, so application teams have started building shims that map other formats to JSON and back. Similarly, with models being heavily post-trained to generate "good" JSON, I think there's a better model-constraint alignment story with JSON than with most arbitrary grammars.

    That said, internally we experiment quite a lot with custom grammars all across the stack. It's more complicated to write a grammar than a JSON schema (though LMs are very good at grammar writing now) and more error-prone to debug, but it can help significantly in certain cases (e.g. having models write custom DSLs not commonly found on the internet, at various parts of a model training pipeline, etc.). I'm hoping that with the right tooling around it, the broader community will start nudging beyond JSON.

    To that end, the Python guidance library is really an attempt to make writing grammars more friendly to a Python programmer (see the sketch below). More to be done here of course!
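    To give a flavor of what grammar writing as ordinary Python looks like, here is a small sketch along the lines of the guidance README's CFG example (details may differ across versions):

      import guidance
      from guidance import one_or_more, select

      @guidance(stateless=True)
      def number(lm):
          n = one_or_more(select(['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']))
          # Allow an optional leading minus sign.
          return lm + select(['-' + n, n])

      @guidance(stateless=True)
      def operator(lm):
          return lm + select(['+', '-', '*', '/'])

      # A simple arithmetic expression grammar, composed recursively.
      @guidance(stateless=True)
      def expression(lm):
          return lm + select([
              number(),
              '(' + expression() + operator() + expression() + ')',
          ])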

  • If you can screen tokens against your grammar fast enough, you can build a bitmask over the entire token vocabulary and apply it right before sampling (see the sketch at the end of this comment). As vocabulary sizes grow, this gets more complex to do in real time, but we (and other libraries) have found several optimizations to do this extremely quickly (e.g. for guidance, we detail some optimizations here: https://github.com/guidance-ai/llguidance/blob/main/docs/opt...).

    Other libraries work by essentially pre-computing all the masks for all possible generations, but of course you're then restricted to simple grammars (like a subset of regular expressions).
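    To make the mask-then-sample step concrete, here is a generic sketch in plain PyTorch (an illustration of the general technique, not guidance's actual implementation): the parser screens every vocabulary token and produces a boolean mask that zeroes out disallowed tokens just before sampling.

      import torch

      def masked_sample(logits: torch.Tensor, allowed: torch.Tensor) -> int:
          """Sample a next token, restricted to grammar-legal tokens.

          logits:  [vocab_size] raw model scores for the next token.
          allowed: [vocab_size] bool mask, True where the parser accepts
                   the token as a valid continuation of the grammar.
          """
          masked = logits.masked_fill(~allowed, float("-inf"))
          probs = torch.softmax(masked, dim=-1)
          return int(torch.multinomial(probs, num_samples=1))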
