
solenoid0937

Karma: 213

Created: 2024-01-11

Recent Activity

  • Commented: "GPT-5.4"

    Weird, I have had the opposite experience. Codex is good at doing precisely what I tell it to do; Opus suggests well-thought-out plans, even when it has to push back to deliver them.

  • This is a total non-issue if they hold their red lines, which they clearly intend to do if you've read the memo.

    Also, if you've ever actually chatted with anyone at the company, you'd know they are not all the same, and Anthropic genuinely does stand apart here.

  • Sure, the architecture is from 2017. But the gap between GPT-1 and frontier models today is not simply "more FLOPs," nor as simple as "standing up PyTorch and vLLM" - there are thousands of undocumented decisions about data, alignment, reward modeling, training stability, and inference-time strategies, plus a lot of tribal knowledge held by a small group of people who overwhelmingly do not want to work on weapons systems.

    The dense-model argument is self-defeating long term. Sparsity (MoE etc.) lets you build a smarter model at the same compute budget, so going dense because you can afford to waste FLOPs is how you fall behind: you never make the step-function improvements needed.

    Sure, the DoD invented HPC, but it also invented the internet, and then the private sector made it actually useful.
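    The sparsity point above can be made concrete with rough per-token FLOP arithmetic. A minimal sketch with illustrative numbers (not any specific model), using the common approximation of ~2 FLOPs per active parameter per token:

    ```python
    # Rough per-token forward-pass cost: ~2 FLOPs per *active* parameter.
    # All figures below are illustrative assumptions, not real model specs.

    dense_params = 70e9   # dense model: every parameter is active per token
    moe_total    = 400e9  # MoE model: total parameters (the "capacity")
    moe_active   = 70e9   # MoE model: parameters actually routed per token

    flops_dense = 2 * dense_params  # per-token compute, dense
    flops_moe   = 2 * moe_active    # per-token compute, MoE

    # Same per-token compute budget...
    assert flops_dense == flops_moe

    # ...but the sparse model has far more capacity to draw on.
    capacity_ratio = moe_total / dense_params
    print(f"MoE capacity advantage at equal compute: {capacity_ratio:.1f}x")
    ```

    This is the crux of the argument: at a fixed inference budget, the sparse model carries several times the parameters, so "we can afford dense" buys nothing except forgone capacity.
    
    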

  • Non-issue if they keep their red lines.

  • First, I have to say I loved your thoughtful & detailed comment. You have clearly considered this from the financial side; let me add some color from the perspective of someone working with frontier researchers.

    > As the "alignment" folks on the AI industry are likely to learn

    I will push back here. Dario & co are not the starry-eyed naive idealists implied here. This is a calculated decision to maximize their goal (safe AGI/ASI).

    You have the right philosophy on the balance sheet side of things, but what you're missing is that researchers are more valuable than any military spend or any datacenter.

    It does not matter how many hundreds of billions you have - if the 500-1000 top researchers don't want to work for you, you're fucked; and if they do, you will win because these are the people that come up with the step-change improvements in capability.

    There is no substitute for sheer IQ:

    - You can't buy it (god knows Zuck has tried, and failed to earn their respect).

    - You can't build it (yet).

    - And collaboration amongst less intelligent people does not reliably achieve the requisite "Eureka" realizations.

    Had Anthropic gone forth with the DoD contract, they would have lost this top crowd, crippling the firm. On the other hand, by rejecting the contract, Anthropic's recruiting just got much easier (and OAI's much harder).

    Generally, the defense crowd has a somewhat inflated sense of self-worth. Yes, there's a lot of money, but very few highly intelligent people want to work for them. (Almost no top talent wants to work for Palantir, despite the pay.) So, naturally:

    - If OpenAI becomes a glorified military contractor, they will bleed talent.

    - Top talent's low trust in the government means Manhattan Project-style collaborations are dead in the water.

    As such, AGI will likely emerge from a private enterprise effort that is not heavily militarized.

    Finally, the Anthropic restrictions will last, what, 2.5 more years? They are being locked out of a narrow subset of use cases (DoD contract work only - vendors can still use it for all other work; Hegseth's reading of SCR is incorrect) and have farmed massive reputation gains with both top talent and the next administration.

HackerNews