That is indeed a paper lol.
But in all seriousness: this isn't news and doesn't change any of what I said. You have a class of 20 objects that subjects recall as they dream. Same setup (fMRI), small n, a very, very simplified design.
Look, the reason we can't do this is both physical AND information-theoretic. In the best case you are getting an EXTREMELY reduced-dimensionality signal. It's not like the early days of AI, where the story was "it's not possible today, but there's nothing in principle stopping us from a Kurzweil-like world." It's just not really possible.
Anyway, the studies on this are restricted to specific neuroscience questions. The paper shows dreams contain object-like representations in the visual cortex, which is cool! And important! But it doesn't imply anything about decoding thoughts and dreams in general.
I don't know that it's fair to characterize the LLM community as ignorant, rediscovering PSP/TCP. I actually see it as programmers rediscovering survival analysis, and most LLM folks I know have learned these perspectives through that lens. I could be wrong about PSP, maybe things are more nuanced, but what is there that isn't already covered by foundational statistics?
How about people who understand that things are changing whether anyone likes it or not and want to stay relevant? What about people who care about the end product and don't want to rabbithole on design decisions for a proof of concept? What about someone who understands there is more nuance than assuming that people with a different perspective on AI are lesser than the people who resist the technology? You may feel you know the "right way," but to everyone else who is interested in operating in a world changing beneath our feet, rather than whining that everything will be different and denigrating the people who want to succeed in it, this opinion is not exactly convincing. If you want to kludge your way through a problem, you're welcome to, but it's not logical to suggest that's the only "right" way, or to imply that people who build with AI don't like "understanding systems."
When I build with AI, I build things I never would have built before, and in doing so I'm exposed to technologies, designs, and tools I wasn't aware of. I ask questions about them. Sure, I don't understand the tools as deeply as the person who spent ten hours going down rabbit holes to answer a simple question, but I don't really see that depth as particularly valuable.
I do disagree with the notion that you have to slog through a problem to learn efficiently. The idea that it's either "the easy way [bad, you don't learn]" or "the hard way [good, you do learn]" is a false dichotomy. Agents/LLMs are like having an always-on, highly adept teacher who can synthesize information intuitively and who you can explore a topic with. That's extremely efficient and effective for learning. There may be some tradeoff in certain areas, but the idea that LLMs make you not learn doesn't feel right; they let you learn _as much as you want, about the things you want_, which wasn't possible before. You used to have to learn, inefficiently(!), a bunch of crap you didn't want to in order to get at the thing you _did_ want. I will not miss those days.