sunaurus

Karma: 1048
Created: 2016-01-29

Recent Activity

  • I am pretty convinced that for most types of day-to-day work, any perceived improvements from the latest Claude models, for example, are total placebo. In blind tests on normal tasks, people would probably have no idea whether they're using Opus 4.5 or 4.6.

  • Maybe I'm misreading it, but I don't see him saying it's just the cost of *inference* alone (which is the strawman that the article in the OP is arguing against). He says:

    > this company is wilfully burning 200% to 3000% of each Pro or Max customer that interacts with Claude Code

    There is of course this meme that "Anthropic would be profitable today if they stopped training new models and only focused on inference", but people on HN are smart enough to understand that this is not realistic, due to model drift and also due to competition from other models. So training is forever a part of the cost of doing business, until we see some fundamental changes in the underlying technology.

    I can only interpret Ed Zitron as saying "the cost of doing business is 200% to 3000% of the price users are paying for their subscriptions", which sounds extremely plausible to me.

  • Surely that can't be true? The expectation would be that people pay $200 a month to build open source and personal hobby software with Claude?

  • If it remains just a token-predictor that can’t evolve, then I am not worried about it replacing humans.

  • I’m not worried about job loss as a result of being replaced by AI, because if we get AI that is actually better than humans - which I imagine must mean AGI - then I don’t see why that AI would be interested in working for humans.

    I’m definitely worried about job loss as a result of the AI bubble bursting, though.

HackerNews