
stevenhuang

Karma: 1329
Created: 2013-08-30

Recent Activity

  • The author is right about inlining but has picked the wrong example to demonstrate it, since the compiler cannot inline across await.

    If this function is in the hot path the last thing you'll want to do is to needlessly call await. You'll enter a suspension point and your task can get migrated to another thread. It is in no way comparable to the dead simple inlining example given later.

    This is why you should always benchmark before making guesses, and double-check you're even benchmarking the right thing. In this case they took the findings from a non-async benchmark and applied them to async. That will lead you to very wrong conclusions, and to performance issues.
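The "benchmark the right thing" point can be sketched with std's own tools. This is a minimal sketch, not a real harness (you'd want something like criterion for that), and `sum_to` is a hypothetical stand-in for the hot-path function; the key part is `black_box`, which keeps the compiler from optimizing the measured work away:

```rust
use std::hint::black_box;
use std::time::Instant;

// Hypothetical workload standing in for the code under test.
fn sum_to(n: u64) -> u64 {
    (1..=n).sum()
}

fn main() {
    let iters: u32 = 10_000;
    let start = Instant::now();
    for _ in 0..iters {
        // black_box on both the input and the result stops the compiler
        // from constant-folding the call away, so the loop actually
        // measures the work we think we're measuring.
        black_box(sum_to(black_box(100_000)));
    }
    let elapsed = start.elapsed();
    println!("{:?} per iteration", elapsed / iters);
}
```

Without the `black_box` calls, an optimizing build can legally delete the whole loop and the "benchmark" reports near-zero time for work that was never done.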

  • Yeah the author makes a really poor example with the async case here.

    Async in Rust is done via cooperative scheduling. If you call await you enter a potential suspension point: you're willingly telling the scheduler you're done running and giving another task a chance to run. Compound that with something like tokio's work stealing and your task may get migrated to run on a different thread.

    If this is in the hot path, making another call to await is probably the worst thing you can do lol.

    The author demonstrates later with a dead simple inlining example that the asm is equivalent. Wonder why he didn't try that with await ;)
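The suspension point the two comments above describe can be shown with only std. This is a sketch under stated assumptions: `NoopWaker`, `YieldOnce`, and the hand-rolled `block_on` are all hypothetical names invented here, and the busy-polling executor is deliberately naive — a real runtime like tokio parks the task and may resume it on another thread, which is exactly the migration being discussed:

```rust
use std::future::Future;
use std::pin::{pin, Pin};
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};

// No-op waker: good enough for a busy-polling executor sketch.
struct NoopWaker;
impl Wake for NoopWaker {
    fn wake(self: Arc<Self>) {}
}

// A future that returns Pending exactly once before completing --
// the kind of suspension point every `.await` can introduce.
struct YieldOnce {
    yielded: bool,
}

impl Future for YieldOnce {
    type Output = ();
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if self.yielded {
            Poll::Ready(())
        } else {
            self.yielded = true;
            cx.waker().wake_by_ref(); // ask to be polled again
            Poll::Pending
        }
    }
}

// Minimal block_on: polls in a loop and counts how many polls it took.
fn block_on<F: Future>(fut: F) -> (F::Output, u32) {
    let waker = Waker::from(Arc::new(NoopWaker));
    let mut cx = Context::from_waker(&waker);
    let mut fut = pin!(fut);
    let mut polls = 0;
    loop {
        polls += 1;
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return (out, polls);
        }
    }
}

fn main() {
    // The await below suspends the task once, so the executor needs a
    // second poll to drive it to completion -- control left the task.
    let ((), polls) = block_on(async {
        YieldOnce { yielded: false }.await;
    });
    println!("polls: {polls}");
}
```

The extra poll is the cost that never shows up in a non-async benchmark of the same code, which is why measuring the sync version and extrapolating to the `.await`-ing one goes wrong.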

  • The author is not wrong. You seem unaware of how nascent the field of LLM interpretability research is.

    See this thread and article from earlier today showing what we're still able to learn from these interpretability experiments.

    https://news.ycombinator.com/item?id=47322887

  • At least it's an easy way for those who don't know what they're talking about to out themselves.

    If they'd bothered to look at how modern neuroscience tries to explain human cognition, they'd see it explained in terms that parallel modern ML. https://en.wikipedia.org/wiki/Predictive_coding

    We only have theories for what intelligence even means. I wouldn't be surprised if, fundamentally, there are more similarities than differences between human minds and LLMs (prediction and error minimization).

HackerNews