In your experience, can you take tech-debt-riddled code and ask Claude to come up with an entirely new version that fixes the tech debt and design issues you've identified? Presumably you'd keep the same set of tests, but you could leverage the power of AI in greenfield scenarios to just do a rewrite (while letting it see the old code). I don't know how well this would work; I haven't reached the heavy tech debt stage in any of my projects, as I do mostly prototyping. I'd be interested in others' thoughts.
Generally there is a "temperature" parameter that adds randomness or variety to an LLM's outputs by changing the likelihood of each candidate next word being selected. This means you can keep regenerating the same prompt and get a different plausible response each time, all from the same model. This doesn't mean the model believes any of them; it just keeps hallucinating likely text, some of which fits better than the rest. It is still very much the same brain (or set of trained parameters) playing with itself.
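To make the parent comment concrete, here is a minimal sketch of how temperature sampling typically works: logits are divided by the temperature before the softmax, so low temperatures sharpen the distribution toward the top token and high temperatures flatten it. The logit values here are made up for illustration; real models produce one logit per vocabulary entry.

```python
import math
import random

def temperature_probs(logits, temperature=1.0):
    """Convert raw logits to a probability distribution, rescaled by temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(logits, temperature=1.0):
    """Sample a token index from the temperature-adjusted distribution."""
    probs = temperature_probs(logits, temperature)
    r = random.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Hypothetical logits for four candidate next tokens.
logits = [2.0, 1.0, 0.5, 0.1]
cold = temperature_probs(logits, temperature=0.2)  # nearly deterministic
hot = temperature_probs(logits, temperature=2.0)   # much more varied
```

At temperature 0.2 the top token takes almost all the probability mass, while at 2.0 the distribution is far more even, which is why regenerating at higher temperatures yields more varied answers.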
I have been using search engines for 30 years; my queries are not vague. I put in as many keywords, "inurl" operators, and whatnot as I can manage. I don't use Kagi blocklists. Google results for my specific queries are garbage, and I am much happier with Kagi. If you are happy with Google, that's fine too. Perhaps we are just in different bubbles and mine isn't well served by Google.