
Verdex

Karma: 2434

Created: 2014-08-28

Recent Activity

  • In my mind, Google is the one AI provider that is more or less guaranteed to make it past the next 2-5 years. Maybe Anthropic and OpenAI can be profitable with their current models, but they'll never get to stop investing in the next model while Google is there with effectively infinite money.

    So either scaling stops hard pretty soon so that spending can stabilize, or the investors are going to show up asking for several pounds of flesh.

    Suddenly, 'don't get left behind, this is the worst these models will ever be' sounds a lot more like 'get locked in with a hyper giant that'll destroy your livelihood and not notice'.

    Although who knows, maybe local models will be a thing (though when your dev team gets banned with no explanation and the next milestone is coming up, somehow I don't expect that transition to go sufficiently quickly).

  • There's code structure, but then there's also code philosophy.

    The worst code bases I have to deal with have either no philosophy or a dozen competing and incompatible philosophies.

    The best are (obviously) written in my battle tested and ultra refined philosophy developed over the last ~25 years.

    But I'm perfectly happy to work even in code bases written with philosophies that I violently disagree with, just as long as the singular (or at least compatible) philosophy has a certain maturity and consistency to it.

  • Yeah, even the AI CEOs are admitting that training scaling is over. They claim that we can keep the party going with post-training scaling, which I personally find hard to believe, but I'm not really up to speed on those techniques.

    I mean, maybe you can just keep an eye on what people are using the tools for and then monkey-patch your way to something sufficiently AGI. I'll believe it when we're all begging outside the data centers for bread.

    [Based on the rest of the history of science and technology since the Stone Age, I would place AGI at 200-500 years out at least. You have to wait decades after a new toy is released for everyone to realize everything they knew was wrong; then the academics get to work, then everyone gets complacent, then a new accidental discovery produces a new toy, and so on.]

  • I think many serious endeavors would benefit from including a magician.

    Intelligent experts fail time and again because, while they are experts in their field, they don't know much about lying to people.

    The magician is an expert in lying to people and in directing their attention toward where the magician wants it and away from where the magician doesn't.

    If you have an expert telling you, "Wow, this is really amazing, I can't believe they solved this impossible technical problem," then maybe get a magician in the room to see what they think before buying the hype.

  • So, I kind of get this sentiment. There is a lot of goalpost moving going on. "The AIs will never do this." "Hey, they're doing that thing." "Well, they'll never do this other thing."

    Ultimately, I suspect that we've not really thought that hard about what cognition and problem solving actually are. Perhaps it's because, when we do, we see that the vast majority of our time is just taking up space, with little pockets of real work sprinkled in. If we're realistic, then we can't justify ourselves to the money people. Or maybe it's just a hard problem with no benefit in solving it. Regardless, the easy way out is to just move the posts.

    The natural response to that, I feel, is to point out that, hey, people would also fail in this way.

    But I think this is wrong. At least it's wrong for the software engineer. Why would I automate something that fails like a person? And in this scenario, are we saying that automating an unethical bot is acceptable? Let's just stick with unethical people, thank you very much.
