
xandrius

Karma: 2933

Created: 2024-02-10

Recent Activity

  • I know you got a ton of responses already, but not caring about replicability just invalidates science as a method. If we care only about being first to publish, we end up in the current situation, where we don't even know whether what we know is actually even remotely correct.

    All because journals prefer novelty over confirmation. It's like a house of cards: looks cool, but not stable or long-term at all.

  • I don't think that's necessary, just make sure the context is not shared. A pretty good model can handle both sides well enough.

  • I think people are misunderstanding reward functions and LLMs.

    LLMs don't actually have a reward system like some other ML models.

  • It's silly; who wouldn't answer yes to the question "would you like to finish your task faster?" The real trick is to produce more while putting in less effort than before.

  • One thing to consider before doing the same: a computer built for a homelab has much lower power consumption.

    The setup mentioned in the article averages 600 kWh/year, as opposed to a pretty solid HP EliteDesk (my own homelab), which uses 100 kWh/year. Sure, you don't get a GPU, but for what it's used for, you might as well use a laptop.
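
    The consumption gap above can be turned into a rough yearly cost estimate. A minimal sketch, assuming a placeholder electricity rate of 0.30 per kWh (not from the comment; substitute your local price):

    ```python
    # Rough yearly cost comparison of the two homelab setups.
    # Consumption figures come from the comment above; the electricity
    # rate is an assumed placeholder, currency-agnostic.

    PRICE_PER_KWH = 0.30  # assumed local rate

    article_setup_kwh = 600  # avg yearly consumption of the article's setup
    elitedesk_kwh = 100      # HP EliteDesk homelab

    article_cost = article_setup_kwh * PRICE_PER_KWH
    elitedesk_cost = elitedesk_kwh * PRICE_PER_KWH
    savings = article_cost - elitedesk_cost

    print(f"Article setup: {article_cost:.2f}/year")  # 180.00/year
    print(f"EliteDesk:     {elitedesk_cost:.2f}/year")  # 30.00/year
    print(f"Savings:       {savings:.2f}/year")  # 150.00/year
    ```

    At that assumed rate, the lower-power box saves on the order of 150 a year, which adds up over the typical multi-year life of a homelab machine.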

HackerNews