>what if the LLM gets something wrong that the operator (a junior dev perhaps) doesn't even know it's wrong?
the same thing that always happens when a dev gets something wrong without knowing it's wrong: either code review/QA catches it, or the user does, and a ticket gets filed
>if it fails here, it will fail with other things, in not such obvious ways.
is infallibility a realistic expectation of a software tool or its operator?
The only AI products that work are GUIs that you can interface with, or that perform tasks for you
Maybe the real value of AI, particularly LLMs, is in the interface it provides for other things, and not in the AI itself
What if AI isn't the _thing_? What if it's the thing that gets us _to_ the thing?
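To make that framing concrete, here's a rough sketch (not any particular vendor's API; `call_llm` is a hypothetical stub for whatever model you'd actually use): the LLM's only job is to translate a request into a call against tools that already exist, and the boring deterministic code does the real work.

```python
import json

# Hypothetical stand-in for a real LLM call; in practice this would hit a
# model API with structured/tool-calling output. Stubbed so the sketch runs.
def call_llm(prompt: str) -> str:
    return json.dumps({
        "tool": "create_ticket",
        "args": {"title": "Checkout button unresponsive on mobile"},
    })

# The "other things" the model is an interface to: plain functions.
def create_ticket(title: str) -> str:
    return f"ticket created: {title!r}"

def search_docs(query: str) -> str:
    return f"top doc hit for {query!r}"

TOOLS = {"create_ticket": create_ticket, "search_docs": search_docs}

def handle(user_request: str) -> str:
    """Use the LLM only as a translator from natural language to a tool call."""
    decision = json.loads(call_llm(
        f"Pick one tool from {list(TOOLS)} and JSON-encode the call for: {user_request}"
    ))
    tool = TOOLS[decision["tool"]]  # the deterministic part does the actual work
    return tool(**decision["args"])

if __name__ == "__main__":
    print(handle("a user says the checkout button does nothing on mobile"))
```

In that picture the model isn't the product; it's the front door to the product.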