It's why, as a retail investor, you should never buy things that would otherwise not have been available to you (but were available to those "elite"/institutional investors previously).
Think pre-IPO buy-in. Investors in the know and other well-connected institutional investors get first dibs on all of the good ones. The bad ones are palmed off onto retail investors. It's no different with private credit and private equity. These sorts of deals have good ones and bad ones - the good ones will have been taken by the time they flow down to retail.
> observes how it looks from the outside (UI) and inside (with debug tools), creates a spec sheet of how the app functions, and then sends those specs to the "clean room" AI.
and tbh, i cannot see any issues if this is how it is done - you just have to prove that the clean-room AI has never been exposed to the source code of the app you're trying to clone.
> it's code that solves the problem in a way no human would choose
but is it better than the way a human would choose? And does it matter?
A compiler may write assembly in a way that no human would choose either. And in the early days of compilers, when most programmers would still hand-write assembly, they would scoff at the generated assembly as being bad.
Not to mention that in games like Go, the "AI" choosing moves that no human would choose meant it surpassed humans!
In other words, solving a problem "in a way humans would choose" is distinct from just solving the problem, and imho, not always required at all.