https://github.com/JBAnderson5
Multiple times I’ve rejected an LLM’s file changes and asked it to do something different, or even just not to make the change. It almost always tries to make the same file edit again. I’ve also noticed that if I make user edits on top of its changes, it will often try to revert them.
I’ve found the best thing to do is switch back to plan mode to refocus the conversation.
> Now consider what the same analyst does with an LLM agent: "Show me all software companies with over $1B market cap, P/E under 30, and revenue growing over 20% year over year. Build a DCF model for the top 5. Run sensitivity analysis on discount rate and terminal growth."
While I think LLMs can improve the interface and help users learn or generate domain-specific languages, I don’t see how a professional can trust an LLM to get a technical request like this right without verification. Wouldn’t a financial professional place more trust in a Bloomberg LLM agent that translates their request into a set of Bloomberg commands?
This project is an enhanced reader for Y Combinator’s Hacker News: https://news.ycombinator.com/.
The interface also allows you to comment, post, and interact with the original HN platform. Credentials are stored locally and are never sent to any server; you can check the source code here: https://github.com/GabrielePicco/hacker-news-rich.
For suggestions and feature requests you can write to me here: gabrielepicco.github.io