Former vulnerability researcher, now a dev in SF at a YC-alum company.
Do you know that to be true or are you speculating?
As we argue on the orange site, companies are paying Sierra AI to integrate voice and text agents into their systems to look up account information and process refunds. Fallbacks to human agents are built into these systems.
We all hate phone trees because they can never handle exceptions to even the most basic functions. We shout "speak to an agent!" into the phone because the website and phone tree only handle the happy path.
I think the skill of hand-writing software is still useful in 2026. The vast majority of programming is one module calling another API, and that does not spark joy. But the truly interesting classes of problems (applying algorithms, or applying complex, arcane domain knowledge) are often not handled well by LLMs. What the author wrote really strikes a chord: we should write the exceptionally difficult sections ourselves so we understand how the software operates.
This reminds me of the observation that Anthropic's unsupervised LLM-generated Rust implementation of sqlite3 was correct for the subset of features they chose, but thousands of times slower (wall clock). Of course, performance will be the next skill targeted by expert-led RLHF, but it is a hard problem with many tradeoffs, and it may prove time-consuming to improve.
This project is an enhanced reader for Ycombinator Hacker News: https://news.ycombinator.com/.
The interface also lets you comment, post, and interact with the original HN platform. Credentials are stored locally and are never sent to any server; you can check the source code here: https://github.com/GabrielePicco/hacker-news-rich.
For suggestions and feature requests you can write to me here: gabrielepicco.github.io