senior software engineer in the USA
I tested out Atlassian Rovo last year. I tried to get it to list all of the Confluence articles I had written in 2025 so I could use that information for my performance review. It found three, regardless of how I queried it. I had actually written over sixty. I tried, but I never did find a good use case for it. Too unreliable.
"AI is actually pretty good at writing tests, especially for common scenarios and edge cases. The tricky part is deciding what to do when tests fail. Sometimes the code is wrong. Other times the spec itself has evolved and the test needs updating."
Yes, I agree, it is good at coming up with lots of scenarios. But after switching to AI-generated unit tests, I discovered that AI writes tests that mirror the code rather than validate that the implementation is correct.
So I have AI write the unit test with a particular pattern in the method name:
<methodName>_when<Conditions>_<expectedBehavior>
Then I have a Claude skill that validates that the method under test matches the first part of the method name, that the setup matches the conditions in the middle part, and that the assertions match the expected behavior in the last part. It does find problems with the unit tests this way. I also have it research whether the production code or the test is wrong - no blindly having AI "fix" things.
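To illustrate the pattern, here is a minimal Python/unittest sketch. `withdraw` and the regex are hypothetical examples of mine, not the commenter's actual code or Claude skill:

```python
import re
import unittest

def withdraw(balance, amount):
    # Hypothetical production code under test.
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

class WithdrawTests(unittest.TestCase):
    # Names follow <methodName>_when<Conditions>_<expectedBehavior>
    def test_withdraw_whenAmountExceedsBalance_raisesValueError(self):
        with self.assertRaises(ValueError):
            withdraw(balance=50, amount=100)

    def test_withdraw_whenAmountWithinBalance_returnsReducedBalance(self):
        self.assertEqual(withdraw(balance=50, amount=20), 30)

# A crude stand-in for the validation step: split each test name into
# its three segments so they can be checked against the test's body.
NAME_RE = re.compile(
    r"^test_(?P<method>\w+?)_when(?P<conditions>\w+?)_(?P<behavior>\w+)$"
)
```

The real validation goes further than the regex, checking that the setup actually matches the conditions segment and the assertions match the behavior segment; the regex only recovers the three parts of the name.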
For more complex methods, though, I still do manual verification by checking what lines get hit for each test.
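That kind of per-test line check can be sketched with Python's stdlib trace module (`clamp` and `lines_hit` are hypothetical illustrations; a real project would more likely use coverage.py's per-test dynamic contexts):

```python
import trace

def clamp(x, lo, hi):
    # Hypothetical function under test with three distinct paths.
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x

def lines_hit(func, *args):
    """Return the set of line numbers in func's file executed by one call."""
    tracer = trace.Trace(count=True, trace=False)
    tracer.runfunc(func, *args)
    fname = func.__code__.co_filename
    return {lineno for (f, lineno), hits in tracer.results().counts.items()
            if f == fname and hits}
```

Comparing `lines_hit(clamp, -5, 0, 10)` against `lines_hit(clamp, 15, 0, 10)` shows that each test exercises a different branch, which is exactly the manual check described above.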
This project is an enhanced reader for Y Combinator's Hacker News: https://news.ycombinator.com/.
The interface also allows you to comment, post, and interact with the original HN platform. Credentials are stored locally and are never sent to any server; you can check the source code here: https://github.com/GabrielePicco/hacker-news-rich.
For suggestions and feature requests, you can write to me here: gabrielepicco.github.io