
energy123

Karma: 1679

Created: 2024-06-21

Recent Activity

  • > maybe we don't have a full grasp of the implications of AI. Maybe it's actually going to be rather benign and more boring than expected

    Part of the concern is that there's no one "AI". There is a frontier that keeps advancing. So "it" (the AI frontier in the year 2036) probably will be benign, but that "it" will keep advancing and changing. Then the law of large numbers works against you, as you keep rolling the dice and hoping it's not a 1 each time. The dice rolls aren't i.i.d., of course, but they're probably not as correlated as we would like, and that's a problem as we keep rolling. The analogy would be nuclear weapons. They most likely won't get used in the next 10 years, but on a 200-year time frame they're a big deal as far as species-level risks go, which is what they're talking about here.
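    The compounding effect behind that argument can be sketched with a few lines of Python. The per-period probability and the horizon below are illustrative assumptions, not figures from the comment; the point is only that a small per-roll risk accumulates under repeated i.i.d. rolls.

    ```python
    def cumulative_risk(p_per_period: float, periods: int) -> float:
        """P(at least one bad outcome in n periods) = 1 - (1 - p)^n,
        assuming independent rolls with the same per-period probability p."""
        return 1 - (1 - p_per_period) ** periods

    # Hypothetical 1% risk per decade: negligible over one decade...
    print(cumulative_risk(0.01, 1))    # 0.01
    # ...but over 20 decades (the 200-year frame) it compounds to ~18%.
    print(round(cumulative_risk(0.01, 20), 4))
    ```

    Correlated rolls would change the exact numbers but not the qualitative shape: as long as the correlation is imperfect, the cumulative risk still grows with the number of rolls.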

  • Every person says this too, but it ignores the diversity in types of people. I know someone who happily watches ads and makes purchasing decisions based on them. I ignore them and do not. I don't believe I am being manipulated by the ads. The companies choose to advertise to target those other people; they lose money serving ads to people like me, but it's still a net win for them.

  • The rule of thumb "LLMs are good at reducing text, not expanding it" is a good one here.

  • I'm on the optimistic side about how useful LLMs are, but I have to agree. You cultivate an instinct for how to steer the models and reduce hallucinations, but you're not building articulable knowledge or engaging in challenging thinking. It's more like learning muscle-memory reactions to certain forms of LLM output: trusting the output more, trying another prompting strategy, clearing the context or not, and so on.

    To the extent we can call it skill, it's probably going to be made redundant in a few years as the models get better. It gives me a kind of listlessness that assembly line workers would feel.

  • Alleles are shared with kin, which gives rise to multilevel selection effects in rare but real scenarios.

HackerNews