HarHarVeryFunny

Karma: 5155

Created: 2017-07-29

Recent Activity

  • Having the experience to build a frontier model is still a scarce commodity, hence the salaries, but to advance AI you need new ideas and architectures, which isn't what you are buying there.

    A human-level AI wouldn't help unless it also had the experience of these LLM whisperers, so how would it gain that knowledge (it's not in the training data)? Maybe a human would train it? Couldn't that human train another developer instead, if people really were the bottleneck?

    People like Sholto Douglas have said that the actual bottleneck for development speed is compute, not people.

  • Humanoid robots are also a pipe dream until we have the brains to put into them. It's easy to build a slick-looking shell and teleoperate it to dance on stage or serve drinks. The company 1X is actually selling a teleoperated "robot" (Neo), saying the software will come later!

    As with AGI, if the bottleneck to doing anything is human level intelligence or physical prowess, then we already have plenty of humans.

    If you gave Musk, or any other AI CEO, an army of humans today, do you think that would accelerate his data center expansion (help him raise money, get power, get GPU chips)? Why would a robot army help? Are you imagining them running around laying bricks at twice the speed of a human? Is that the bottleneck?

  • The eye-watering salary you probably have in mind is for a manager at Meta, the same company that just laid off 600 actual developers. Why just Meta and not other companies? Because they seem to be blaming Llama's poor performance on the managers.

    Algorithmic efficiency improvements are being made all the time, and serve mainly to reduce inference cost, which is already falling. That isn't going to accelerate AI advancement; it just makes ChatGPT more profitable.

    Why would human level AGI help spin up chip fabs faster, when we already have actual humans who know how to spin them up, and the bottleneck is raising the billions of dollars to build them?

    All of these hard take-off fantasies seem to come down to: We get human-level AGI, then magic happens, and we get hard take-off. Why isn't the magic happening when we already have real live humans on the job?

  • We already have AGI - it's called humans - and frankly it's no magic bullet for AI progress.

    Meta just laid 600 of them off.

    All this talk of AGI, ASI, super-intelligence, recursive self-improvement, etc. is just an undefined masturbatory pipe dream.

    For now it's all about LLMs and agents, and you will not see anything fundamentally new until this approach has been accepted as having reached the point of diminishing returns.

    The snake oil salesmen will soon tell you that they've cracked continual learning, but it'll just be memory, and it still won't be the AI intern that learns on the job.

    Maybe in 5 years we'll see "AlphaThought" that does a better job of reasoning.

HackerNews