Show HN: Llama 3.1 70B on a single RTX 3090 via NVMe-to-GPU bypassing the CPU

2026-02-21 20:57 · 395 points · 101 comments · github.com

High-efficiency LLM inference engine in C++/CUDA. Run Llama 70B on RTX 3090. - xaskasdf/ntransformer




Comments

  • By 01100011 2026-02-22 2:35 · 4 replies

    Yeah, GPUDirect should allow you to DMA straight to a storage device.

    I wonder... what if the m.2 storage was actually DRAM? You probably don't need persistence for spilling a model off the GPU. How would it fare vs just adding more host memory? The m.2 ram would be less flexible, but would keep the system ram free for the CPU.
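Whatever media sits behind the M.2 connector, the slot is electrically a PCIe x4 link, so that link caps what a "DRAM-on-M.2" device could deliver. A back-of-envelope sketch (per-lane rates are nominal post-encoding figures; function names are just illustrative):

```python
# Even if the M.2 device held DRAM instead of NAND, the slot is
# electrically PCIe x4, so the link itself caps the throughput.
# Per-lane rates below are nominal GB/s after encoding overhead.
PCIE_GBS_PER_LANE = {3: 0.985, 4: 1.969, 5: 3.938}

def m2_link_cap_gbs(gen: int, lanes: int = 4) -> float:
    """Best-case GB/s through an M.2 slot, ignoring protocol overhead."""
    return PCIE_GBS_PER_LANE[gen] * lanes

for gen in (3, 4, 5):
    print(f"PCIe {gen}.0 x4: {m2_link_cap_gbs(gen):.1f} GB/s")
```

So on a Gen4 board the DRAM-backed M.2 would top out near 8 GB/s per slot, well below what the same DIMMs would do on the memory bus.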

    • By javchz 2026-02-22 3:14 · 3 replies

      Yeah, a RAM disk would probably work wonders. It's a shame Intel Optane didn't become a standard; those types of workflows would be amazing for it.

      • By xaskasdf 2026-02-22 15:07 · 1 reply

        You know, there are a bunch of Optanes floating around on the local market here; I'll try to get hold of one to check if there's any improvement.

        • By jonassm 2026-02-22 18:08 · 1 reply

          Optane would be good for latency, but not so much for bandwidth, which seems to be your major bottleneck if I'm not mistaken?

          • By xaskasdf 2026-02-22 20:06

            Yeah, the mobo upgrade is something I have to do anyway, so that will cover it more or less; the Optane is something I hadn't thought about.

      • By TechSquidTV 2026-02-22 3:54

        Ahhh damn it. Intel! Come back!

    • By lmeyerov 2026-02-22 7:39

      This is exactly what I was wondering.

      I gave a talk a few years ago at dask summit (conf?) on making the stars align with dask-cudf here. We were helping a customer accelerate log analytics by proving out our stack for nodes that look roughly like: parallel ssd storage arrays (30 x 3 GB/s?) -> GPUDirect Storage -> 4 x 30 GB/s PCIe (?) -> 8 x A100 GPUs, something like that. It'd be cool to see the same thing now in the LLM world, such as a multi-GPU MoE, or even a single-GPU one for that matter!
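A node like that is a chain of stages, and sustained throughput is just the slowest link. A toy sketch using the rough figures from the comment above (all numbers are the comment's own guesses plus an assumed HBM figure, not measurements):

```python
# Sustained storage->GPU throughput is bounded by the slowest stage.
def bottleneck(stages: dict) -> tuple:
    """Return (name, GB/s) of the slowest stage in the pipeline."""
    name = min(stages, key=stages.get)
    return name, stages[name]

stages = {
    "ssd_array (30 x 3 GB/s)": 30 * 3.0,     # parallel SSD arrays
    "pcie (4 x 30 GB/s)": 4 * 30.0,          # GPUDirect Storage links
    "gpu_hbm (8 x ~1500 GB/s)": 8 * 1500.0,  # aggregate A100 HBM (assumed)
}
name, gbs = bottleneck(stages)
print(f"pipeline is bounded by {name} at {gbs:.0f} GB/s")
```

With those numbers the SSD array, not the PCIe links or the GPUs, sets the ceiling; that is the stage worth widening first.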

    • By ElectricalUnion 2026-02-22 5:12

      Doesn't "m.2 storage but DRAM" (hopefully at NVMe/PCIe rather than SATA speed) already exist as Compute Express Link (CXL), just not in this specific M.2 form factor? If only RAM weren't silly expensive right now, one could use ~31 GB/s of additional bandwidth per NVMe connector.

    • By bhewes 2026-02-22 17:33

      The Marvell CXL 2.0 DDR4 card that ServeTheHome covered is used for KV-cache speedups. And I'm personally looking forward to CXL 3.0 and memory coherence across my system builds.

      https://www.servethehome.com/hyper-scalers-are-using-cxl-to-...

  • By randomtoast 2026-02-21 22:57 · 4 replies

    0.2 tok/s is fine for experimentation, but it is not interactive in any meaningful sense. For many use cases, a well-quantized 8B or 13B model that stays resident will simply deliver a better latency-quality tradeoff.

    • By xaskasdf 2026-02-22 1:11 · 4 replies

      Yeah, actually I wanted to see if this was possible at all. I managed to get around 3000 tokens/s on a PS2 with classic transformers, since the Emotion Engine is capable of 32-bit addressing, but it has like 32 GB of RAM. So I ran into the question of why that was fast when I couldn't get that speed even with small models, and the answer is that the instructions went straight from memory to the GPU; that's the main difference from how a regular computer does inference, where everything has to go through the CPU every time. As I mentioned too, on professional cards you can avoid these problems naturally, since they have instructions precisely for this, but sadly I don't have 30k bucks to spare on a GPU :(

      • By derstander 2026-02-22 1:32

        *32MB of RAM (plus 4MB of video RAM and a little sound and IOP memory).

      • By SmithRenaldo 2026-02-24 0:07

        The $5/hr B200 rate is fine for training, but cloud latency usually breaks real-time signal processing. I’ve been hitting similar walls with MemeRadar; when you're processing high-volume spikes, the bottleneck is memory bandwidth, not raw TFLOPS. Quantizing to fit L3 cache is an option, but you lose the precision needed for spotting subtle rug-pull patterns. For 24/7 production workloads, local hardware TCO usually beats cloud rentals.

      • By eleventyseven 2026-02-22 5:46 · 2 replies

        > I don't have 30k bucks to spare on a gpu :(

        Do you have $2/hr to rent an RTX 6000 96GB or $5/hr for B200 180GB on the cloud?

        • By superkuh 2026-02-22 5:47

          I'd rather not give money to scalper barons if I can avoid it. Fab capacity is going to hardware for rental rather than hardware for humans.

        • By xaskasdf 2026-02-22 15:00 · 2 replies

          I thought about that, but I don't know if they'd allow me to modify the Linux kernel and the NVIDIA CUDA kernels at all.

          • By jonassm 2026-02-22 17:37 · 1 reply

            In those systems you could probably leverage something like Nvidia SCADA or GDS directly.

            • By xaskasdf 2026-02-22 20:07

              Actually, since they have GDS directly, it should perform really well on professional GPUs.

          • By green-salt 2026-02-22 17:23

            I think you can do a bunch of that on Digitalocean's GPU droplets.

      • By anoncow 2026-02-22 4:14 · 1 reply

        3000 tokens per second on 32 MB of RAM?

        • By fc417fc802 2026-02-22 4:56 · 1 reply

          fast != practical

          You can get lots of tokens per second on the CPU if the entire network fits in L1 cache. Unfortunately the sub 64 kiB model segment isn't looking so hot.

          But actually ... 3000? Did GP misplace one or two zeros there?

          • By xaskasdf 2026-02-22 15:48

            I wondered the same, but the rendering seems right; the output was almost instant. I'll recheck the token counter. Anyway, as you say, fast isn't practical. Actually I had to develop my own tiny model https://huggingface.co/xaskasdf/brandon-tiny-10m-instruct to fit something "usable", and it's basically a liar or a disinformation machine haha

    • By Wuzado 2026-02-21 23:57

      I can imagine a couple of scenarios in which a high-quality, large model would be much preferred over lower-latency models, primarily when you need the quality.

    • By tyfon 2026-02-21 23:30 · 1 reply

      I didn't really understand the performance table until I saw the top ones were 8B models.

      But 5 seconds / token is quite slow yeah. I guess this is for low ram machines? I'm pretty sure my 5950x with 128 gb ram can run this faster on the CPU with some layers / prefill on the 3060 gpu I have.

      I also see that they claim the process is compute bound at 2 seconds/token, but that doesn't seem correct with a 3090?

      • By tgrowazay 2026-02-21 23:59 · 5 replies

        LLM speed is roughly <memory_bandwidth> / <model_size> tok/s.

        DDR4 tops out at about 27 GB/s.

        DDR5 can do around 40 GB/s.

        So for a 70B model at 8-bit quant, you will get around 0.3-0.5 tokens per second using RAM alone.
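That rule of thumb can be worked out in a few lines for the numbers in this thread (the DDR figures are the commenter's round numbers; the 3090's ~936 GB/s is its spec-sheet VRAM bandwidth):

```python
def tokens_per_sec(bandwidth_gbs: float, params_billion: float,
                   bytes_per_param: float = 1.0) -> float:
    """Decode speed ~ memory bandwidth / bytes touched per token.
    At 8-bit quantization each parameter is 1 byte."""
    return bandwidth_gbs / (params_billion * bytes_per_param)

for name, bw in [("DDR4 ~27 GB/s", 27.0),
                 ("DDR5 ~40 GB/s", 40.0),
                 ("3090 VRAM ~936 GB/s", 936.0)]:
    print(f"{name}: {tokens_per_sec(bw, 70):.2f} tok/s for 70B @ 8-bit")
```

The same formula explains the 0.2 tok/s figure upthread: when weights stream from a PCIe 3.0 x4 NVMe drive at a few GB/s, the denominator stays at 70 GB while the numerator collapses.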

        • By someguy2026 2026-02-22 0:17 · 1 reply

          DRAM speed is one thing, but you should also account for the data rate of the PCIe bus (and/or VRAM speed). But yes, holding it "lukewarm" in DRAM rather than on NVMe storage is obviously faster.

          • By tgrowazay 2026-02-23 20:00

            Yes.

            In general, systems usually have a PCIe version with better bandwidth than that system's RAM.

            For example, a system with DDR4 (~27 GB/s) usually has at least PCIe 4.0 (~32 GB/s at x16).

            But you can create a bottleneck by pairing a DDR5 (~40 GB/s) system with a PCIe 4.0 card.
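That comparison, made concrete with nominal per-lane PCIe rates (post-encoding figures; the DDR numbers are the round dual-channel estimates used upthread):

```python
PER_LANE_GBS = {3: 0.985, 4: 1.969, 5: 3.938}  # nominal GB/s per lane

def pcie_gbs(gen: int, lanes: int = 16) -> float:
    """Best-case GB/s for a PCIe slot of the given generation and width."""
    return PER_LANE_GBS[gen] * lanes

ddr4, ddr5 = 27.0, 40.0   # dual-channel estimates from this thread
x16_gen4 = pcie_gbs(4)    # PCIe 4.0 x16
print(f"PCIe 4.0 x16: {x16_gen4:.1f} GB/s")
print("DDR4 system: RAM is the bottleneck?", ddr4 < x16_gen4)
print("DDR5 system: RAM is the bottleneck?", ddr5 < x16_gen4)
```

So on a DDR4 box the x16 link can drain RAM faster than RAM can supply it, while a DDR5 box with the same Gen4 slot flips the bottleneck onto the link.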

        • By xaskasdf 2026-02-22 1:28

          yeah, actually I'm bottlenecked af since my mobo has PCIe 3.0 only :(

        • By uf00lme 2026-02-22 0:47 · 1 reply

          Channels matter a lot; quad-channel DDR4 is going to beat dual-channel DDR5 most of the time.

          • By wtallis 2026-02-22 2:03

            Four channels of DDR4-3200 vs two channels of DDR5-6400 (four subchannels) should come out pretty close. I don't see any reason why the DDR4 configuration would be consistently faster; you might have more bank groups on DDR4, but I'm not sure that would outweigh other factors like the topology and bandwidth of the interconnects between the memory controller and the CPU cores.
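"Pretty close" is exact at the peak-bandwidth level: each DDR channel moves 8 bytes per transfer, so the two configurations come out identical on paper (real-world efficiency is where they diverge):

```python
def ddr_peak_gbs(transfers_mt_s: int, channels: int) -> float:
    """Peak GB/s: MT/s x 8 bytes per 64-bit channel x channel count."""
    return transfers_mt_s * 8 * channels / 1000

quad_ddr4 = ddr_peak_gbs(3200, 4)  # four channels of DDR4-3200
dual_ddr5 = ddr_peak_gbs(6400, 2)  # two channels of DDR5-6400
print(quad_ddr4, dual_ddr5)  # 102.4 102.4
```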

        • By vlovich123 2026-02-22 0:13

          Faster than the 0.2 tok/s this approach manages.

        • By zozbot234 2026-02-22 0:28 · 1 reply

          Should be active param size, not model size.

          • By tgrowazay 2026-02-23 19:55

            Yes, you’re right.

            Llama 3.1, however, is not MoE, so all params are active.

            For MoE it is tricky, because for each token you only use a subset of params (an “expert”) but you don’t know which one, so you have to keep them all in memory or wait until it loads from slower storage, potentially different for each token.
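That "wait until it loads from slower storage" cost can be sketched as a per-token time budget. Everything here is illustrative: the active-parameter size, the residency fraction, and both bandwidth figures are assumptions, not measurements:

```python
def moe_tok_per_sec(active_gb: float, resident_frac: float,
                    fast_gbs: float, slow_gbs: float) -> float:
    """1 / (time to read resident expert bytes from fast memory
            + time to fetch the missing experts from slow storage)."""
    t = (active_gb * resident_frac) / fast_gbs \
        + (active_gb * (1 - resident_frac)) / slow_gbs
    return 1.0 / t

# Hypothetical: 37 GB of active experts per token (8-bit), VRAM at
# ~936 GB/s, NVMe at ~7 GB/s, half the needed experts already resident.
print(f"{moe_tok_per_sec(37, 0.5, 936, 7):.2f} tok/s")
print(f"{moe_tok_per_sec(37, 1.0, 936, 7):.2f} tok/s if all resident")
```

Even a modest miss rate dominates the budget, which is why expert prediction/prefetching (as discussed elsewhere in this thread) matters so much for MoE offloading.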

    • By fluoridation 2026-02-22 3:56

      That's slower than just running it off CPU+GPU. I can easily hit 1.5 tokens/s on a 7950X+3090 and a 20480-token context.

  • By rl3 2026-02-21 23:42 · 1 reply

    Nice. I've been looking at doing something similar, more on the order of running a 1T model with less than half the available VRAM.

    One workup indicated it was theoretically possible to modify a piece of SGLang's routing layer to support JIT predict-ahead expert swaps from Gen5 NVMe storage straight into GPU memory.

    I'm hoping that proves true. The setup relies on NVIDIA Dynamo, so NIXL primitives are available to support that.

    Curious if anyone's tried this already.

    • By xaskasdf 2026-02-22 1:14

      That would be nice to see. Actually I was thinking about getting another 3090 and a mobo upgrade, since I'm bottlenecked by PCIe 3, to try running GLM 4.7 or 5 at Q4_K_M; it should be possible.

HackerNews