https://github.com/jftuga
I wonder how this compares to my M4 Air with 10 GPU cores and 32 GB of RAM. My system can only run ~14B-parameter models at any reasonable speed, and the accuracy of models that size can be underwhelming. I am looking forward to the day when we can run models locally at a reasonable price, at a reasonable speed, and with reasonable accuracy. I don't think we are there just yet.
I am researching go-string-concat-benchmark [1]:
A performance comparison of four common Go string building methods.
I recently updated my go-stats-calculator to include many more stats [2]:
CLI tool for computing statistics (mean, median, variance, std-dev, skewness, etc.) from files or standard input.
I also created claude-image-renamer [3]:
AI-powered image renaming script that generates descriptive filenames for screenshots.
[1] https://github.com/jftuga/go-string-concat-benchmark
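The benchmark's four string-building methods aren't named above; a minimal sketch, assuming the usual candidates (`+=` concatenation, `fmt.Sprintf`, `strings.Builder`, and `bytes.Buffer`):

```go
package main

import (
	"bytes"
	"fmt"
	"strings"
)

// concatPlus appends with the += operator, which reallocates
// the string on every iteration (quadratic in total work).
func concatPlus(parts []string) string {
	s := ""
	for _, p := range parts {
		s += p
	}
	return s
}

// concatSprintf rebuilds the string via fmt.Sprintf each time,
// paying formatting overhead on top of the reallocation.
func concatSprintf(parts []string) string {
	s := ""
	for _, p := range parts {
		s = fmt.Sprintf("%s%s", s, p)
	}
	return s
}

// concatBuilder uses strings.Builder, which grows one internal
// buffer and is the idiomatic choice since Go 1.10.
func concatBuilder(parts []string) string {
	var b strings.Builder
	for _, p := range parts {
		b.WriteString(p)
	}
	return b.String()
}

// concatBuffer uses bytes.Buffer, similar amortized behavior
// but with a []byte-to-string copy at the end.
func concatBuffer(parts []string) string {
	var b bytes.Buffer
	for _, p := range parts {
		b.WriteString(p)
	}
	return b.String()
}

func main() {
	parts := []string{"hello", ", ", "world"}
	fmt.Println(concatPlus(parts))
	fmt.Println(concatSprintf(parts))
	fmt.Println(concatBuilder(parts))
	fmt.Println(concatBuffer(parts))
}
```

All four produce identical output; in a real comparison each would be wrapped in a `testing.B` benchmark so `go test -bench` can report ns/op and allocations.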
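The stats listed for go-stats-calculator (mean, median, variance, std-dev, skewness) follow standard definitions; a self-contained sketch of those calculations (not the tool's actual code, and using population rather than sample moments):

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// mean returns the arithmetic mean of xs.
func mean(xs []float64) float64 {
	sum := 0.0
	for _, x := range xs {
		sum += x
	}
	return sum / float64(len(xs))
}

// median sorts a copy of xs and returns the middle value
// (or the average of the two middle values).
func median(xs []float64) float64 {
	s := append([]float64(nil), xs...)
	sort.Float64s(s)
	n := len(s)
	if n%2 == 1 {
		return s[n/2]
	}
	return (s[n/2-1] + s[n/2]) / 2
}

// variance returns the population variance (mean squared
// deviation from the mean).
func variance(xs []float64) float64 {
	m := mean(xs)
	sum := 0.0
	for _, x := range xs {
		sum += (x - m) * (x - m)
	}
	return sum / float64(len(xs))
}

// skewness returns the population skewness: the mean of the
// cubed standardized deviations (third standardized moment).
func skewness(xs []float64) float64 {
	m := mean(xs)
	sd := math.Sqrt(variance(xs))
	sum := 0.0
	for _, x := range xs {
		z := (x - m) / sd
		sum += z * z * z
	}
	return sum / float64(len(xs))
}

func main() {
	data := []float64{1, 2, 3, 4, 10}
	fmt.Println("mean:", mean(data))         // 4
	fmt.Println("median:", median(data))     // 3
	fmt.Println("variance:", variance(data)) // 10
	fmt.Println("std-dev:", math.Sqrt(variance(data)))
	fmt.Println("skewness:", skewness(data)) // positive: tail pulled right by 10
}
```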
This project is an enhanced reader for Y Combinator Hacker News: https://news.ycombinator.com/.
The interface also lets you comment, post, and interact with the original HN platform. Credentials are stored locally and are never sent to any server; you can check the source code here: https://github.com/GabrielePicco/hacker-news-rich.
For suggestions and feature requests you can write to me here: gabrielepicco.github.io