
famouswaffles

Karma: 6012
Created: 2023-02-15

Recent Activity

  • Remakes are primarily a way to bank on the sustained relevance of some movies, not a way to keep them relevant. Remakes of movies that are no longer relevant flop or heavily underperform, and they release to little fanfare.

  • >LLMs are most definitely (discrete-time) Markov chains in this sense: the variables take their values in the context vectors, and the distribution of the new context window depends only on the previously sampled context.

    When 'the previously sampled context' can be arbitrarily long, arbitrarily complex, and of arbitrary modality, that's not a Markov chain. That's just being funny with words. By that logic, humans are also Markov chains. (A toy sketch of this framing follows below.)
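    A minimal toy sketch of that framing, with a made-up vocabulary and a placeholder uniform "model" (not any real LLM): if you call the entire context window the "state", the chain property holds formally, but only because the state space has been made astronomically large.

      import random

      VOCAB = ["the", "cat", "sat", "on", "mat", "."]

      def next_token_distribution(context):
          # Stand-in for an LLM forward pass: P(next token | current context).
          # A real model conditions on the full context; here it is just uniform.
          return {tok: 1.0 / len(VOCAB) for tok in VOCAB}

      def step(context, window=8):
          # One "Markov transition": sample a token, then slide the context window.
          dist = next_token_distribution(context)
          tok = random.choices(list(dist), weights=list(dist.values()))[0]
          return (context + (tok,))[-window:]  # new state depends only on the old state

      state = ("the", "cat", "sat", "on", "the")
      for _ in range(5):
          state = step(state)
      print(state)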

  • It's an image. There's simply no way to add a watermark to an image that's both imperceptible to the user and non-trivial to remove. You'd have to pick one of those options.
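    A toy illustration of that trade-off, using a random 8x8 array as the "image" and a naive least-significant-bit scheme (not any real watermarking product): the imperceptible mark survives an exact copy, but not even a coarse re-quantization, which is roughly what any lossy re-encode or resize does.

      import numpy as np

      rng = np.random.default_rng(0)
      image = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
      watermark = rng.integers(0, 2, size=(8, 8), dtype=np.uint8)  # 1 bit per pixel

      # Imperceptible embed: each pixel changes by at most 1.
      marked = (image & 0xFE) | watermark

      # Extraction works on an exact copy...
      print(np.array_equal(marked & 0x01, watermark))  # True

      # ...but a trivial laundering step (coarse re-quantization, standing in
      # for lossy re-encoding) wipes the hidden bits out.
      laundered = ((marked.astype(np.int16) // 4) * 4).astype(np.uint8)
      print(np.array_equal(laundered & 0x01, watermark))  # False, almost surely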

  • >Yeah - you might want to check what you actually typed there.

    That's what you typed in your comment. Go check. I just figured it was intentional since surprise is the first thing you expect humans to show in response to it.

    >Not sure what you're trying to prove by doing it yourself though. Have you heard of random sampling? Never mind ...

    I guess you fancy yourself a genius who knows all about LLMs now, but sampling wouldn't matter here (see the toy sampling sketch after this comment). Your whole point was that it happens because of a fundamental limitation on the part of LLMs that leaves them unable to do it. Even one contrary response, never mind multiple, would be enough. After all, some humans would simply say 'mat'.

    Anyway, it doesn't really matter. Completing with 'mat' doesn't have anything to do with a lack of understanding. It's just the default 'assumption' that a completion is what's being sought.
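    On the sampling point above, a toy sketch with made-up logits (no real model): temperature sampling gives varied completions across runs, which is why even a single contrary response is informative; a capability that was genuinely absent wouldn't surface at any temperature.

      import numpy as np

      rng = np.random.default_rng()
      tokens = ["mat", "floor", "sofa", "chair"]
      logits = np.array([2.0, 1.2, 0.8, 0.3])  # made-up preferences

      def sample(temperature):
          # Softmax over temperature-scaled logits, then draw one token.
          z = logits / temperature
          probs = np.exp(z - z.max())
          probs /= probs.sum()
          return rng.choice(tokens, p=probs)

      print([sample(0.01) for _ in range(5)])  # near-greedy: almost always 'mat'
      print([sample(1.0) for _ in range(5)])   # sampled: a mix of completions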

  • Claude and GPT both ask for clarification

    https://claude.ai/share/3e14f169-c35a-4eda-b933-e352661c92c2

    https://chatgpt.com/share/6919021c-9ef0-800e-b127-a6c1aa8d9f...

    >Of course if you know anything about LLMs you should realize that they are just input continuers, and any conversational skill comes from post-training.

    No, they don't. Post-training makes things easier, more accessible, and more consistent, but conversational skills are present in pre-trained LLMs just fine. Append a small transcript to the start of the prompt and you get much the same effect (a sketch of this follows at the end of this comment).

    >I'm not sure why you regard this as a "gotcha" question. If you're expressing opinions on LLMs, then table stakes should be to have a basic understanding of LLMs - what they are internally, how they work, and how they are trained, etc.

    You proposed a distinction and described a situation that would make that distinction falsifiable. And I simply told you LLMs don't respond the way you claim they would. Even when models respond 'mat' (now I think your original point had a typo?), it is clearly not due to a lack of understanding of what normal sentences are like.

    >If you find a description of LLMs as input-continuers in the least bit contentious then I'm sorry to say you completely fail to understand them - this is literally what they are trained to do. The only thing they are trained to do.

    They are predictors. If the training data is solely text then the output will be more text, but that need not be the case. Words can go in while images, actions, or audio come out. In that sense, humans are also 'input continuers'.
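    To make the transcript point above concrete, a sketch of the prompt shape only; the completion call is a commented-out placeholder, not a real API. A base, pre-trained-only model asked to continue this text will already behave conversationally.

      import textwrap

      transcript = textwrap.dedent("""\
          The following is a conversation between a curious User and a helpful Assistant.

          User: What's the capital of France?
          Assistant: Paris.

          User: And of Japan?
          Assistant: Tokyo.

          User: {question}
          Assistant:""")

      prompt = transcript.format(question="Could you rephrase that for me?")
      # completion = base_model.complete(prompt)  # placeholder, not a real API call
      print(prompt)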

HackerNews