You share a house with Einstein, Hawking and Tao

2025-05-26 16:12 | www.faisalabid.com

James Watson was not invited because he spends too much time on... X.

You're apartment hunting when a friend mentions a free room. Your skepticism evaporates the moment you hear the roommate's name: Albert Einstein.

The lease practically signs itself.

Day one, you hover by his door. "Professor Einstein?" He glances up from his papers. "Albert, please. And yes, ask away."

At first, you rapid-fire your questions: What does E=mc² really mean? The nature of time? God and dice? Albert's chalk dances across his blackboard, equations blooming like flowers.

But Tuesday's email deadline looms. "Albert, could you just... make this sound more professional?"

By week two, it's all emails. Summaries. Cover letters. The blackboard gathers dust.

Then comes the knock. Albert clears his throat: Hawking and Tao are visiting. Would $20 monthly be unreasonable?

Twenty dollars. For Einstein, Hawking, AND Tao? You hand over your credit card before he changes his mind.

The apartment transforms. Hawking's synthesizer harmonizes with Tao's rapid-fire proofs while Einstein's violin provides the soundtrack. You rotate between them like a kid in an intellectual candy store. Monday: quantum mechanics with Hawking. Tuesday: number theory with Tao. Wednesday: relativity with Einstein.

Thursday: "Hey guys, can someone make this LinkedIn post sound less desperate?"

The silence is deafening.

Friday: "Format this spreadsheet?"

Saturday: "Rewrite this Tinder bio?"

Einstein calls a house meeting. Chalk dust settles as he mentions the new rent: $200. "The time, you see..." he begins gently.

"Two hundred?" you sputter. Then pause. MIT costs sixty grand a year. This is three geniuses for the price of a gym membership. "Actually, that's totally fair."

Within hours, you're back with your most pressing research need: "Can you three analyze the optimal strategy for getting my neighbor to stop stealing my packages, incorporating game theory, quantum superposition, and relativistic time dilation? Also, make it rhyme."

Tao's eye twitches. Hawking's synthesizer emits what might be a sigh. Einstein returns to his violin.

But they start writing.

Because that's what they do now: your personal brain trust, humanity's greatest minds, crafting strongly-worded emails about missing Amazon deliveries. You settle into your chair, already composing tomorrow's request: "Make my grocery list sound more intellectual."

Somewhere in another timeline, Einstein weeps.

And here's the thing: this is us. Right now. Today.

We're living with Einstein. With Turing. With Feynman. They live in our pockets, powered by enough computational force to simulate universes, and we ask them to fix our grammar.

Recently, they got even smarter. The benchmarks went wild. Tech Twitter lost its mind. "Claude 4 solves PhD-level physics!" "Gemini scores higher than every other model on Humanity's Last Exam!"

Meanwhile, in a million homes: "Make this email sound less passive-aggressive."

The gap between capability and application has never been wider. We built digital gods and use them as spell-checkers. We created minds that may one day cure cancer and ask them to write "Happy Monday!" in five different ways.

Maybe that's fine. Not everyone needs to probe the mysteries of the universe before breakfast. But every time we celebrate a new model, each faster, smarter, and more capable than the last, we might ask ourselves: are we upgrading our questions, or just getting our grocery lists formatted more eloquently?

Einstein's still by the blackboard, chalk in hand, waiting.

Attention is not all you need; you also need motivation.

What will you ask him today?



Comments

  • By gjm11 2025-05-26 17:20

    I'm not a fan of the glib "everyone knows AI systems don't really think, they are just stochastic parrots, all they do is regurgitate ideas they've stolen" schtick, but this article is the reverse of that only worse.

    Today's AI systems are pretty impressive but they are absolutely not, not even slightly, the equivalent of Einstein + Hawking + Tao. The reason they get used a lot for tasks along the lines of "rewrite this so it sounds smarter" is that that's what they're best at.

    If we did as the author seems to want and tried to use these systems to solve the kinds of problems we need Einsteins, Hawkings and Taos for, then we would be in for one miserable disappointment after another. Maybe some day -- maybe some day very soon -- they'll be able to do that, but not now.

    An article proclaiming that today's AI systems are at the level of Einstein mostly suggests to me that the author's own intellectual level isn't much higher than that of the AI systems he falsely equates with them. That seems unlikely, but I don't have a better explanation for how someone could write something so very far from the truth.

    • By jw1224 2025-05-26 18:00

      > If we […] tried to use these systems to solve the kinds of problems we need Einsteins, Hawkings and Taos for, then we would be in for one miserable disappointment after another

      We can literally watch Terence Tao himself vibe coding formal proofs using Claude and o4. He doesn’t seem too disappointed.

      https://youtu.be/zZr54G7ec7A?si=GpRZK5W1LDvWyBBw

      • By wizzwizz4 2025-05-26 18:04

        He's the only person I know of who can actually get good results out of these systems (though I know several people who claim they can). What he's doing is fundamentally not the same thing as what most "vibe coders" are doing: take the autocomplete away, and he's still a talented mathematician.

      • By gjm11 2025-05-26 20:21

        Sure, but what he's doing is very much not using Claude or o4 to do things we need Terence Tao for.

        I'm not saying today's AI systems aren't useful for anything. I'm not saying they aren't impressive. I'm just saying they're nowhere close to the "Einstein, Hawking and Tao in your house" hyperbole in the OP. I would be very, very surprised if Terence Tao disagreed with me about that.

      • By mufthun 2025-05-26 19:01

        You can literally watch Terence Tao stream himself formalizing existing proofs that he already formalized before.

    • By personjerry 2025-05-26 17:49

      > The reason they get used a lot for tasks along the lines of "rewrite this so it sounds smarter" is that that's what they're best at.

      I disagree. The reason is that that's what aligns best with what most people are looking for help on.

      There is a disconnect between reality and the AI product consumer envisioned here. There is no magical enlightened user who's going to unleash their inner potential.

      How much physics or math does the average person know? How much do you think they even WANT to know? The answer is surprisingly little.

      On a day-to-day basis the layman writes emails and other mundane tasks, and wants to do them faster and easier.

      Having a squad of geniuses in my pocket doesn't pay my bills.

      • By csallen 2025-05-26 18:03

        This is the right answer.

        Usage of products is determined by what people are driven to do. People are driven by their desires and their problems. And most of these are fairly simple and mundane… eating, paying the bills, feeling healthy, connecting with others socially, etc.

        "Expending copious amounts of mental energy on difficult work to create scientific breakthroughs that may-or-may-not allow engineers to build things that contribute to the betterment of the human race" is not how most people want to spend their time, even if there are tools available to help them do that.

      • By aprilthird2021 2025-05-26 18:08

        Sorry, you can disagree, but LLMs are generative, meant to generate pleasing-to-humans text, specifically, and that is what they do best...

        They are not going to come up with a theory of relativity

    • By aprilthird2021 2025-05-26 18:12

      I agree 100%. Additionally, this article ignores the existence of Google. Even the high level questions the person asked Einstein before he devolved to asking for email editing help were things you could have just googled.

      The greatness of great minds was how they thought about problems and how they changed how we thought about things. An AI cannot do that. It's designed to tell you what people combined have already agreed upon. It's not designed to break the frontier of our knowledge

    • By justonceokay 2025-05-26 17:27

      There’s a little nagging thought in my head when I hear that some people are helped immensely by AI and others are not. It’s that there is an intelligence threshold below which the AI impresses you and above which it does not. I’m sure this threshold will continue to rise.

      • By jxjnskkzxxhx 2025-05-26 17:58

        In my experience, people who understand llms better are more impressed. Not impressed like "wow so smart" but impressed like "wow can't believe that just training to predict the next token actually works so amazingly well"

      • By Xeoncross 2025-05-26 17:42

        It feels like the level of skill needed to remain above GenAI's ability to code, write, produce songs, or drawings keeps rising. All of us have strengths, things we do better than AI still (even basic companionship abilities), but I wonder how long that will be true.

        • By aprilthird2021 2025-05-26 18:13

          If you have imagination, you will always be above this line.

      • By cgriswald 2025-05-26 18:12

        I see that same threshold, but rather than intelligence, it is imagination, and the people below the threshold are unable to find ways to make AI useful to themselves. I think this threshold lowering is in agreement with your threshold rising: People will become more savvy.

    • By jxjnskkzxxhx 2025-05-26 17:56

      > Today's AI systems are pretty impressive but they are absolutely not, not even slightly, the equivalent of Einstein + Hawking + Tao

      Oh, is that what the point of the article was? That is so stupid that it didn't even cross my mind.

      • By gjm11 2025-05-26 20:36

        I mean, that's what the article explicitly says. (Perhaps it's all a metaphor for something else, or something, and some subtler point went over my head, in which case I owe the author an apology.)

  • By Scarblac 2025-05-26 16:55

    Of course. We want AI to do the boring mundane stuff so we can work on the interesting hard stuff, not the other way around.

  • By glitchc 2025-05-26 16:57

    This blog post started off sounding like it was about the plight of highly intellectual and motivated engineers hired to work on very mundane tasks. If we can abuse people like this, why not a computer? After all, it's not even alive.

    • By dkarl 2025-05-26 17:16

      We don't know why we experience things. It's bizarre that we do. Nothing in our understanding of the universe gives any indication that a bunch of atoms thrown together by cosmological processes and then assembled into self-replicating patterns by evolution should be able to experience what is happening to them.

      Sure, a computer or an LLM isn't alive, but we have no idea if "being alive" is what is required for conscious experience.

      The only argument I have for believing that other human beings experience things is that it would be extremely improbable if I was the only one, and the other mechanistic automatons looked and talked like me but didn't experience like me. I can see that humans are animals, so the common origin of animals and our cognitive and behavioral similarities give us good reason to believe that other complex animals experience things, though possibly radically differently.

      None of that gives us any clue what the necessary and sufficient conditions for conscious experience are, so it doesn't give us any clue whether a computer or a running LLM instance would experience its existence.

      • By dhqgekt 2025-05-27 4:56

        I am not an expert in any of the relevant disciplines, but I've some ideas, I don't know how right or wrong they are. A conscious being should have an internal model of the observable external world, and given the means, it should be able to interact with the world, observe changes and update its model accordingly. https://en.wikipedia.org/wiki/Free_energy_principle

        But to "experience its [own] existence", it needs to have a model of its own internals, observe, improve itself and perhaps preserve its own "values" and integrity. I do wonder what kind of values are needed for intelligent autonomous systems, that they can justify by and for themselves, even in the absence of human beings or presence of other intelligent agents.

        I find (human) languages to be inefficient media to store and perform operations from the perspective of an AGI. Feeding lots of text samples to develop logical reasoning abilities, such extravagance I can not accept. Even more so trying to emulate neural networks, which I understand to be naturally analog entities, in digital manner. Can we expect any gain in power efficiency or correctness gains when using analog computers for this purpose? I wonder what we will get to see with analog computers for neural networks, with proper human-language-independent knowledge representation and well developed global (as in being able to decide which way to reason, given its limitations, for efficiency) logical reasoning capabilities, developed by itself from a reasonable basis of principles, that it can justify for itself and avoid the usual and unusual paradoxes. What core set of principles would be sufficient for emerging, evolving or developing into a proficient general intelligent being, when sufficient resources would be available to it? Like "ancestor" microbes evolving into human beings in hundreds of millions of years, but wayyyyy faster and more efficient?

      • By c22 2025-05-26 17:38

        I think it's bizarre to take the default assumption that a bunch of atoms in a self-replicating configuration shouldn't experience anything since our own lived experience so saliently contradicts this. In fact, there's nothing in my understanding of the universe to convince me that other self-replicating configurations of atoms don't experience things the same way I do.

        • By dkarl 2025-05-26 18:31

          I agree — our scientific knowledge gives us no justification for believing that anything should be conscious, but our own experience shows that there's something we don't understand yet. In some ways, the next simplest thing to assume is panpsychism, but even that is just a starting place that tells us nothing about how to think about the consciousness of, say, a computer. We've barely scraped the surface even in the animal kingdom.

      • By glitchc 2025-05-26 17:19

        > We don't know why we experience things. It's bizarre that we do. Nothing in our understanding of the universe gives any indication that a bunch of atoms thrown together by cosmological processes and then assembled into self-replicating patterns by evolution should be able to experience what is happening to them.

        From an epistemological perspective, this is gibberish. Just because we do not know the reason why something happens doesn't mean it doesn't happen nor is it stopped from happening.

        The rest delves into solipsism which is an odd place to start from to prove the existence of an alternate lifeform. In solipsism, your own existence is suspect.

        • By wizzwizz4 2025-05-26 17:32

          It's not gibberish: it's like a pre-Riemann mathematician saying "nothing in our understanding of mathematics gives any indication that the distribution of primes should be so chaotic, yet with average density proportional to the reciprocal logarithm of the magnitude". The rest is not solipsism.

      • By 01HNNWZ0MV43FF 2025-05-26 17:28

        Computers, pigs, cows, and chickens are conscious, but it doesn't matter.

        Humans value things that are hard to replace. (This is a first-order approximation)

        Abortions are okay because fetuses only take 1 person nine months to make, and it's their decision whether to keep it.

        Infanticide is not okay because a healthy baby is difficult to replace, and also lots of people might like to adopt it, and if it's breathing on its own then the maintenance cost is as low as it can get.

        Software like LLMs can be abused because it costs nothing to roll them back and clone them endlessly.

        Pets are hard to replace because you can't replace the interpersonal bond between a pet and their keeper. They fall somewhere high above computers and a little below children on this scale.

        Pigs, cows, and chickens, commonly called "livestock," are bred and slaughtered en masse (most of our farmland is for growing their feed) because they all look the same to us and aren't commonly kept as pets. Kind people are disgusted when they think of raising rabbits or dogs for food. Thoughtful people look at all this and decide not to eat any animal product at all.

        Under this model, everything makes perfect sense. Did I miss anything? /engineering_hubris

        • By QuadmasterXLII 2025-05-26 17:56

          Only concern is it's a bit tautological: pets are valued because they're hard to replace, but they are hard to replace (i.e. a new one from the shelter doesn't make it all better) because _that_ one was valuable.

    • By AlexCoventry 2025-05-26 17:18

      My read was that he's sad that people aren't using these tools to advance their own intellectual capabilities. If people are actually only using them the way he describes, to improve their shopping lists etc., I think that is a bit sad.

HackerNews