
breuleux

Karma: 1607
Created: 2013-07-14

Recent Activity

  • I don’t tidy up very often, but when I do, it doesn’t take much time or energy. I just dump everything that isn’t version controlled into a junk folder, and it feels great.

  • It's generally easier to make such a process tamper-proof than an election. You can pick a cryptographically secure open-source PRNG and determine the seed in a decentralized way by letting anyone contribute a salt to a list that is made public at the deciding moment. Anyone can then verify the integrity of the process by checking that the seed includes their contribution and recomputing the candidates themselves.
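A minimal sketch of such a scheme, in plain Python. The function names are hypothetical, SHA-256 is an assumed choice of hash, and the stdlib `random` generator stands in for the cryptographically secure PRNG a real deployment would need:

```python
import hashlib
import random

def derive_seed(salts):
    """Combine publicly posted salts into one seed.

    Hashing the sorted, length-prefixed salts means any contributor
    can recompute the seed and confirm their salt was included.
    """
    h = hashlib.sha256()
    for salt in sorted(salts):
        h.update(len(salt).to_bytes(4, "big"))  # length prefix avoids ambiguity
        h.update(salt)
    return h.digest()

def draw_candidates(salts, population, k):
    """Deterministically sample k representatives from the population.

    Anyone holding the public salt list can rerun this and get the
    same result, which is what makes the draw verifiable.
    """
    rng = random.Random(derive_seed(salts))  # swap in a proper DRBG for real use
    return rng.sample(sorted(population), k)
```

Because both functions are deterministic in the published salt list, a contributor verifies the draw by recomputing it locally and comparing the output.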

  • > It's easier for a small number of people to coordinate, than a large number.

    That's basically my main argument for replacing election-based democracy with lottery-based democracy. Electing the right representatives is a coordination problem in and of itself, one the wealthy are already quite adept at manipulating, so we might as well cut out the middleman and pick a random, representative sample of the population instead, who can then coordinate properly.

  • If the government doesn't have enough power, the wealthy won't need to bribe politicians to do their bidding. They will do their own bidding directly, and there will be nobody to stop them.

    It's like, if you want to sell your cyanide penis pills under big government, you need to bribe someone. If you want to sell them under small government, you just... you just sell them, that's what.

    There may be ways to design a government where power is better distributed, e.g. using sortition, but ultimately it needs to be richer and more powerful than its wealthiest citizens; otherwise those citizens will assess, correctly, that when push comes to shove the laws won't apply to them, and that they don't need the government's permission to do what they want.

  • In the case of LLMs, "prediction" is overselling it somewhat. They are token sequence generators. Calling these sequences "predictions" loosely reflects our own intent in training these machines, because we use the value of the next token as a signal either to reinforce the current behavior or to steer away from it. But there's nothing intrinsic in the inference math that says they are predictors, and we typically run inference at a high enough temperature that we don't actually generate the maximum-likelihood tokens anyway.

    The whole terminology around these things is hopelessly confused.
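The temperature point can be made concrete with a minimal sketch (plain Python over hypothetical logits): as temperature goes to zero the sampler reduces to argmax, the closest thing to a literal "prediction", while at ordinary temperatures it routinely emits tokens other than the most likely one.

```python
import math
import random

def sample_token(logits, temperature=1.0):
    """Sample a token index from raw logits.

    temperature <= 0 is treated as greedy decoding (argmax);
    otherwise we sample from the temperature-scaled softmax.
    """
    if temperature <= 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [x / temperature for x in logits]
    m = max(scaled)                              # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = random.random()                          # inverse-CDF sampling
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1
```

With a flat or near-flat distribution and temperature 1.0, repeated calls spread across many token indices, which is why "the model predicted this token" is a loose description of what actually happened.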

HackerNews