
tommy_axle

Karma: 79

Created: 2023-04-25

Recent Activity

  • I'm guessing this is also calculating based on the full context size that the model supports, but depending on your use case that can be misleading. Even on a small consumer card with Qwen 3 30B-A3B you probably don't need 128K context, depending on what you're doing, so a smaller context and some tensor overrides will help. llama.cpp's llama-fit-params is helpful in those cases.
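The context-size tradeoff in the comment above can be made concrete with a rough KV-cache estimate. This is a sketch with assumed architecture numbers (the layer count, KV-head count, and head dimension below are illustrative placeholders, not Qwen 3 30B-A3B's actual config):

```python
# Rough KV-cache size: 2 (K and V) * layers * kv_heads * head_dim
# * context_length * bytes_per_element.
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, ctx_len, bytes_per_elem=2):
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem

# Hypothetical GQA model: 48 layers, 4 KV heads, head_dim 128, fp16 cache.
full = kv_cache_bytes(48, 4, 128, 131072)   # 128K context
small = kv_cache_bytes(48, 4, 128, 8192)    # 8K context
print(f"128K ctx: {full / 2**30:.1f} GiB, 8K ctx: {small / 2**30:.2f} GiB")
# → 128K ctx: 12.0 GiB, 8K ctx: 0.75 GiB
```

Even with made-up numbers, the point stands: the cache scales linearly with context length, so capping context frees a large, fixed chunk of VRAM for offloading more layers.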

  • More like redux vs zustand. Picking zustand was one of the standout picks for me.

  • Commented: "Vim 9.2"

    With all the buzz about orchestration in the age of CLI agents, there doesn't seem to be much talk about vim + tmux with send-keys (a blessing). You can run any number of windows and panes doing different things across multiple projects.
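The workflow above hinges on `tmux send-keys`, which types a command into any pane from outside it. A minimal dry-run sketch (the session, window, and pane names here are made up for illustration):

```shell
# Build a `tmux send-keys` invocation for a target pane.
# Target format is session:window.pane; names below are hypothetical.
send_keys_cmd() {
  printf "tmux send-keys -t %s '%s' Enter" "$1" "$2"
}

# Dry-run: print the command that would start tests in pane 1 of
# window "edit" in session "dev". Run its output directly (with a
# live tmux server and that pane existing) to actually send the keys.
echo "$(send_keys_cmd 'dev:edit.1' 'make test')"
# → tmux send-keys -t dev:edit.1 'make test' Enter
```

A script looping over panes like this is how one terminal can drive builds, tests, and agents across several projects at once.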

  • 4 points · 2 comments · www.hollywoodreporter.com

    A stunning viral video of Cruise vs. Pitt has 'Deadpool & Wolverine' screenwriter warning "Hollywood is about to be revolutionized/decimated," as MPA calls for the company to cease its "infringing…

  • If doing it directly fails (not surprising), wouldn't the next thing (maybe the first thing) to do be to have the AI write a codemod that does what needs to be done, then apply the codemod? Then all you need to do is get the codemod right and apply it to as many files as you need. That seems much more predictable and context-efficient.
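The codemod idea above can be as small as a script applied uniformly across a tree of files. A minimal regex-based sketch (the rename rule, file glob, and function names are hypothetical):

```python
import re
from pathlib import Path

# Hypothetical codemod: rename every call to old_api() to new_api()
# across a tree of Python files. Review the diff before committing.
PATTERN = re.compile(r"\bold_api\(")

def transform(source: str) -> str:
    """Apply the rewrite rule to one file's contents."""
    return PATTERN.sub("new_api(", source)

def run(root: str) -> int:
    """Rewrite matching files in place; return how many changed."""
    changed = 0
    for path in Path(root).rglob("*.py"):
        original = path.read_text()
        updated = transform(original)
        if updated != original:
            path.write_text(updated)
            changed += 1
    return changed

if __name__ == "__main__":
    print(transform("x = old_api(1) + old_api(2)"))
    # → x = new_api(1) + new_api(2)
```

The predictability the comment points at comes from this split: the AI only has to get `transform` right once, and the mechanical loop applies it to any number of files without re-spending context on each one.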

HackerNews