I used it with Sonnet 4.0 a lot, and there was vastly more back-and-forth and correction of "dumb" things, such as forgetting to add "using" statements in C# files.
I don't know if it's the model, harness improvements, built-in memory, or all of the above, but it now often has a step where it checks its own work before trying to build and hitting an inevitable failure.
Those small things add up to a much smoother and richer experience today compared to 6 months ago.
It's possible that it already is, given there are already signs of the US administration leaning on AI. Perhaps they're leaning a bit too heavily and getting the kind of confirmation / feedback they crave?
If they then feed back to the AI the outcomes of its suggested actions, who knows where that'll lead next?
I've seen some code reviews go like:
"Why did you write this as async void?"
"Claude said so."
Is that so far from:
"Why did you use nukes?"
"ChatGPT said so".
It's entirely possible that humanity simply follows AI to its doom.
Does that make me an AI doomer?