"Predict the next word" is to a current LLM what a transistor (or gate) is to a modern CPU. I don't understand LLMs well enough to push the comparison further, but I can see how stacking layers on top of that primitive, each one calling down to "predict the next word" and feeding the output back in as the next input, leads to what we see today. It is turtles all the way down.
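To make the analogy concrete, here's a toy sketch of that feedback loop. The "model" is just bigram counts standing in for an LLM's learned weights (the names `train`, `predict_next`, and `generate` are my own invention, not any real library); the point is only the loop: predict one word, append it, predict again.

```python
from collections import defaultdict, Counter

def train(corpus):
    # Toy stand-in for training: count which word follows which.
    counts = defaultdict(Counter)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

def predict_next(counts, word):
    # The single primitive everything else is built on:
    # given the current word, guess the most likely next one.
    followers = counts[word]
    return followers.most_common(1)[0][0] if followers else None

def generate(counts, prompt, n):
    out = prompt.split()
    for _ in range(n):
        nxt = predict_next(counts, out[-1])
        if nxt is None:
            break
        out.append(nxt)  # output becomes part of the next input
    return " ".join(out)

model = train("the cat sat on the mat the cat ran")
print(generate(model, "the", 4))  # → "the cat sat on the"
```

Everything impressive sits in how good `predict_next` is and what's layered around the loop, but the loop itself really is this simple.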
> Trying to reduce China v America to engineers vs lawyers is so reductive it's just mind blowing this keeps getting repeated.
Think of it as engineers vs non-engineers (lawyers, MBA types, etc.). We complain about that on here all the time (e.g. Boeing). It's a question of where the priorities are: making things better, or making more money? In an ideal world it would be both. Unfortunately, here it is not, otherwise enshittification would not be a thing.