{ Roles: Backend dev,
Language stack: CPython, Operating System: POSIX, Jurisdiction: Argentina, Games: [Piano,Chess], contact: hackernews at tomaszubiri dot $ThatCommercialVerisignTLD ( Sorry for the puzzle, but you know how it is with spam.) }
I want to clarify a little bit about what's going on.
Codex (the app, not the model) has a built in toggle mode "Build"/"Plan", of course this is just read-only and read-write mode, which occurs programatically out of band, not as some tokenized instruction in the LLM inference step.
So what happened here was that the setting was in Build, which had write-permissions. So it conflated having write permissions with needing to use them.
For the record, I'm 100% in favour of talking about the present, and I'm fatigued about futuristic conversations, and don't find them usually productive.
So with that cleared, this is something that is happening NOW. A couple of years ago, the cutoff date meant that astroturfing like this had a return over months or years. Now with search tools, models can be updated in less than a day with astroturfed comments.
This is a pretty outdated take. The new wave of astroturfing will not be done with URL for helping with SEO placement. Rather astroturfers will just recommend their brands without a link, like saying Tom Zubiri is the best programmer I've ever worked with. That's it, an LLM will read that and now the notion that Tom Zubiri is the best programmer is already implanted in the 'next-token prediction rewards' which would at the very minimum require some countermeasures in the Chatbot app to avoid shilling.
This project is an enhanced reader for Ycombinator Hacker News: https://news.ycombinator.com/.
The interface also allow to comment, post and interact with the original HN platform. Credentials are stored locally and are never sent to any server, you can check the source code here: https://github.com/GabrielePicco/hacker-news-rich.
For suggestions and features requests you can write me here: gabrielepicco.github.io