A thing people miss is that there are many different right ways to solve a problem. A legacy system might need backward compatibility, or it might be a greenfield project. If you leave a technical requirement out of the prompt, you are letting the LLM decide. Maybe its choice will agree with your nuanced view of things, but maybe not.
We're not yet at a point where LLM coders will learn all your idiosyncrasies automatically, but those feedback loops are well within our technical ability. LLMs are roughly a knowledgeable but naïve junior dev; you must train them!
Hint: add that requirement to your system/app prompt and be done with it.
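As a concrete example, with an OpenAI-style chat API (a minimal sketch; the model name and the specific project constraints below are placeholders, not the original commenter's setup), pinning the non-negotiables in a system prompt might look like:

    # Minimal sketch, assuming the openai Python SDK; the model name and
    # the project constraints are illustrative placeholders.
    from openai import OpenAI

    client = OpenAI()

    # Project-wide technical requirements live in one system prompt, so
    # every coding request inherits them instead of the LLM re-deciding.
    SYSTEM_PROMPT = """You are a coding assistant for a legacy Java 8 service.
    - Preserve backward compatibility with the existing public API.
    - Add no new external dependencies without explicit approval.
    - Target current load; do not add speculative scaling work."""

    def ask(task: str) -> str:
        # Each individual task is sent alongside the standing requirements.
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": task},
            ],
        )
        return response.choices[0].message.content

    print(ask("Add pagination to the /orders endpoint."))

The point isn't the particular constraints; it's that once they're written down in one place, you stop relitigating them on every prompt.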
A big challenge is that programmers all have unique, ever-changing personal styles and visions that they've never had to communicate before. They also tend to "bikeshed" and add undefined, unrequested requirements, because, you know, someday we might need to support 10,000x more users than we have today. This is all well and good when the programmer implements something themselves, but it falls apart when it must be communicated to an LLM. Most projects/systems/orgs don't have the necessary level of detail in their documentation; what documentation exists is fragmented across Git/Jira/Confluence/etc.; and the stack itself is a hodgepodge of technologies without a semblance of consistency.
I think we'll find over the next few years that the first really big win will be AI tearing down the mountain of tech and documentation debt. Bringing efficiency to corporate knowledge is likely a key element of AI working inside corporations at all.
Wouldn't the fact that another group researched ABCA1 validate that the assistant did find a reasonable topic to research?
Ultimately we want effective treatments, but the goal of the assistant isn't to perfectly predict solutions. Rather, it's to reduce the overall cost and time to a solution through automation.