> I agree. But what I'm trying to say is that we'll soon have automated agents that look for vulnerabilities in agentic flows, ready to be plugged into CI/CD pipelines.
We already have that, and we can see it doesn't perform very well.
An agent that has no reasoning ability will not generate better code than what it was trained on.
https://garymarcus.substack.com/p/llms-dont-do-formal-reason...
> One of the more exciting aspects of LLM-aided development for me is the potential for high quality software
There is no evidence to suggest this is true.
LLMs are trained on large volumes of poor-quality code and, as a result, output poor-quality code.
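To make that concrete, here is a hypothetical illustration (not taken from any particular model's output) of the kind of insecure pattern that is pervasive in public code, and therefore in training data: SQL built by string interpolation, next to the parameterized form that avoids injection.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # String interpolation into SQL: vulnerable to injection.
    return conn.execute(
        "SELECT id FROM users WHERE name = '%s'" % username
    ).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver handles the value safely.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# A classic injection payload: the unsafe query matches every row,
# while the parameterized query treats it as a literal string.
print(find_user_unsafe(conn, "' OR '1'='1"))  # [(1,)]
print(find_user_safe(conn, "' OR '1'='1"))    # []
```

A model trained mostly on the first pattern will happily keep emitting it; it has no way to prefer the second unless the training distribution or a reviewer pushes it there.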
In fact, the "S" in LLM stands for security: there isn't one, and it's just as absent from the code they generate.
LLMs are great, but high-quality software is not one of their selling points.