Might be a person
There is a decent case for this thesis, especially given the shift in training regimes and benchmarking over the last 1-2 years. Frontier labs no longer seem to push pure size/capability; the focus is all-in on agentic AI, which is mainly about complex post-training regimes.
There are good reasons why they don't or can't do simple parameter upscaling anymore, but still, it makes me bearish on AGI, since it's a slow but massive shift in goal-setting.
In practice, this still doesn't mean 50% of white-collar work can't be automated, though.
As someone who has experienced both phases in life where no one approached me and phases where I got approached regularly, it's a mix of external signifiers and some internal woo stuff that people don't really understand consciously. Put another way, when someone says you have to "look approachable", what they actually mean is that a) you have to present yourself externally in a way that makes people more likely to engage with you (the aforementioned hair, clothes, etc.) and b) you have to be internally open to the world (which is what dictates your body language in subtle ways that apparently get picked up). The issue with advice like "have open body language" is that it's impossible to fake a certain type of body language 24/7; you actually have to believe it.
If you are naturally a distrusting person, people will pick up on it, just as they will pick up on it if you're naturally an open person. (The real trick is realizing that "naturally" can be changed.)
Very interesting, but also very speculative. I'm wondering how Trauma Release Exercises could be integrated into the framework, since they also seem to fall under the unlatching-mechanism umbrella.
The overall idea of the body/muscles as an extension of memory feels experientially true, but I would love to see more empirical data on this.
One theory of how humans work is the so-called predictive coding approach. The theory assumes that human brains work similarly to a Kalman filter: we have an internal model of the world that predicts what will happen next and then checks whether the prediction is congruent with the observed changes in reality. Learning then comes down to minimizing the error between this internal model and the actual observations; this is sometimes called the free energy principle. When researchers talk about world models, they tend to refer specifically to internal models of the actual external world, i.e. models that can predict what happens next based on input streams like vision.
Why is this idea of a world model helpful? Because it enables multiple interesting things: predicting what happens next, modeling counterfactuals (what would happen if I do X or don't do X), and other capabilities that tend to be needed for actual principled reasoning.
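To make the Kalman filter analogy concrete, here is a minimal sketch of the predict-compare-update loop as a 1D Kalman-style tracker in Python. All names and numbers are illustrative assumptions, not taken from any particular paper:

    import random

    # Internal model: a scalar belief about a hidden state, plus an
    # uncertainty estimate. All parameters here are illustrative.
    belief, uncertainty = 0.0, 1.0
    process_noise, obs_noise = 0.01, 0.5

    true_state = 1.5  # the hidden quantity the "brain" is trying to track

    for step in range(50):
        # 1. Predict: the internal model forecasts the next observation.
        #    (The world is static here, so the prediction is the belief.)
        prediction = belief
        uncertainty += process_noise

        # 2. Observe: reality delivers a noisy measurement.
        observation = true_state + random.gauss(0.0, obs_noise)

        # 3. Compare: the prediction error is the "surprise" signal.
        error = observation - prediction

        # 4. Update: correct the belief in proportion to the error,
        #    weighted by how much we trust the model vs. the senses.
        gain = uncertainty / (uncertainty + obs_noise ** 2)
        belief += gain * error
        uncertainty *= 1 - gain

    print(round(belief, 2))  # converges toward 1.5 as prediction error shrinks

Learning in the predictive coding sense is roughly this loop run continuously; under Gaussian assumptions, the "free energy" being minimized comes down to a confidence-weighted prediction error.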
This project is an enhanced reader for Y Combinator's Hacker News: https://news.ycombinator.com/.
The interface also allows you to comment, post, and interact with the original HN platform. Credentials are stored locally and are never sent to any server; you can check the source code here: https://github.com/GabrielePicco/hacker-news-rich.
For suggestions and feature requests, you can write to me here: gabrielepicco.github.io