email: coppsilgold@protonmail.com
The author seems obsessed with RaptorQ[1]; this is not a good place for it.
Reed-Solomon over GF(256) is more than adequate. Or just plain LDPC.
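For concreteness, here is a minimal sketch of the GF(256) arithmetic that plain Reed-Solomon builds on: log/antilog tables generated from the primitive polynomial 0x11d (x^8+x^4+x^3+x^2+1, the one most RS implementations use). This is just the field layer, not a full codec:

```python
# GF(2^8) via log/antilog tables, primitive polynomial 0x11d.
# Doubling the EXP table lets gf_mul skip a mod-255 step.
EXP = [0] * 512
LOG = [0] * 256

x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1              # multiply by the generator (x)
    if x & 0x100:        # degree reached 8: reduce mod 0x11d
        x ^= 0x11d
for i in range(255, 512):
    EXP[i] = EXP[i - 255]

def gf_mul(a: int, b: int) -> int:
    """Multiply two GF(256) elements."""
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def gf_div(a: int, b: int) -> int:
    """Divide a by b in GF(256)."""
    if b == 0:
        raise ZeroDivisionError("division by zero in GF(256)")
    if a == 0:
        return 0
    return EXP[LOG[a] - LOG[b] + 255]
```

Addition in this field is plain XOR, which is why RS encode/decode over GF(256) is cheap on ordinary hardware.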
At the moment LLMs tend to work well when you constrain them, and you can craft the constraints with the help of the same LLM in a different session. Then you can verify in yet another session that the output obeys the constraints, and have it adjust the code until it does. If one of the constraints was to yield highly functional code, you can start refining function by function as well. There is a pattern here.
If you are a good engineer you can dictate the data structures to it too; it then performs even better.
I believe the writing is on the wall at this point: it does a very adequate job if I invest enough time in writing and refining the specs and give it the data structures (and/or database schemas) I want it to use. And there is no comparison between the number of hours I spend wrangling it and the number of hours it would take me to write the code myself.
This is the worst it's going to be, and it's already quite good; it wasn't this good a mere three months ago.
The main pitfall is trying to get an LLM to read your mind; in doing so you put too much load on whatever passes for its intelligence. That isn't how you get good results, or a good measure of its capabilities.
This project is an enhanced reader for Ycombinator Hacker News: https://news.ycombinator.com/.
The interface also lets you comment, post, and interact with the original HN platform. Credentials are stored locally and are never sent to any server; you can check the source code here: https://github.com/GabrielePicco/hacker-news-rich.
For suggestions and feature requests you can reach me here: gabrielepicco.github.io