You're interacting with an LLM, so correctness is already out the window. Model-makers train LLMs to work better with MCP in order to increase correctness, which means the only reason MCP increases correctness is that LLMs are specifically trained against it.
So why MCP specifically? Are there other protocols that would provide more correctness once trained against? Have we tried? Maybe a protocol that compresses commands more would take up less context, leaving more room for the actual task and thus offering better correctness.
MCP seems arbitrary as a protocol, because it kinda is. It doesn't >>cause<< the increase in correctness in and of itself; the fact that it >>is<< a protocol that models are trained against is the reason it may increase correctness. Thus, any other protocol would do the same thing.
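To make the compression point concrete, here's a rough Python sketch comparing the character cost of an MCP-style tool call (MCP messages are JSON-RPC 2.0 under the hood) against a hypothetical denser encoding of the same intent. The tool name `get_weather` and the terse syntax are invented for illustration; character count is only a crude proxy for token count.

```python
import json

# An MCP tools/call request is JSON-RPC 2.0 under the hood.
# Tool name and arguments here are made up for the example.
mcp_style = json.dumps({
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Paris"}},
})

# A hypothetical compressed protocol could express the same intent tersely.
terse_style = 'get_weather("Paris")'

# The verbose form costs several times more characters per call,
# context that a denser protocol could spend on the task itself.
print(len(mcp_style), len(terse_style))
```

Whether that saved context actually translates into better correctness is an empirical question; the point is just that MCP's verbosity is a design choice, not a law of nature.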
> Cheap rental properties
They buy at most one.
> basic phone plans
They buy at most one.
> Cheap food, ramen noodles.
Humans have fixed calorie needs; even the very obese spend maybe 10x the norm on food, nothing more.
Compare that to the prospect of someone buying MULTIPLE properties, possibly each 10x more expensive than the norm; the price disparity between a regular car and multiple supercars; the price disparity between owning a boat and/or yacht vs...not owning a boat at all.
> It's absolutely insane that Claude Code can spit out a week's worth of business automation tasks in half a day. And do it at relatively high quality in low-defect rate languages like Rust.
> Europe won't be able to catch that.
You think Europe won't be able to use Claude Code? If Claude Code is the one reaping the majority of the benefits of "spit[ting] out a week's worth of business automation tasks", then it's not worth much to the business. If Claude Code isn't the one reaping the majority of those benefits, then...Europe can use Claude Code too and reap the benefits for its businesses as well.