Fwiw: I didn't read the post carefully, this is just a passing comment.
For my own use case I was trying to test the consistency of an evaluation process and found that injecting a UUID into the system prompt (busting the cache) made a material difference.
Without it, resubmitting the same inputs at close time intervals (e.g. 1, 5, or 30 min) would produce very consistent evaluations. Adding the UUID would decrease consistency (showing true evaluation consistency, not artificially improved by caching) and highlight ambiguous evaluation criteria that were causing problems.
So I wonder how much prompt caching is a factor here. I think these LLM providers (all of them) are caching several layers beyond just tokenization.
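The cache-busting trick I mean is roughly this (a minimal sketch; the prompt text, the `bust_cache` helper name, and where the UUID goes are all illustrative assumptions, not any provider's API):

```python
import uuid

def bust_cache(system_prompt: str) -> str:
    """Append a random UUID so each submission has a unique prompt
    prefix, preventing the provider from serving a cached response."""
    return f"{system_prompt}\n\n[cache-buster: {uuid.uuid4()}]"

# Hypothetical evaluator prompt; the same base prompt now yields a
# distinct string on every call.
base = "You are an evaluator. Score the answer from 1 to 5."
p1 = bust_cache(base)
p2 = bust_cache(base)
assert p1 != p2            # distinct on every call
assert p1.startswith(base)  # evaluation instructions unchanged
```

The point is that the actual instructions stay identical; only a throwaway suffix changes, so any remaining run-to-run consistency reflects the model rather than a cache hit.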
Hmm. I'll think more about this.
It makes sense to me that a culture that values collectivistic cohesion would shy away from paradigm-shifting ideas (disruption). I also see a correlation between disruptive ideas and principled critical thinking, as opposed to conventional thinking.
I guess on some level my assumption is that they are adjacent. Those embedded in a collectivistic culture can think critically but may run into walls within a sandbox of convention. This is how they can be great at iterative improvement and engineering but struggle with paradigm-shifting ideas.
I think you have a point, but there's definitely some nuance here I'm still untangling.