
Flashtoo

Karma: 186

Created: 2018-08-20

Recent Activity

  • Commented: "GPT-5.4"

    > Prompts with more than 272K input tokens are priced at 2x input and 1.5x output for the full session for standard, batch, and flex.

  • Then just post your opinions rather than the text the LLM dreamed around your opinions. Short posts and tweets tend to be well-liked on HN, there is no need to puff it up to a big blog post.

  • This is true beyond software. It used to be that the proof of the thinking process was in the resulting artifact. You can no longer infer from the existence of a piece of text, and its level of polish, that the apparent author put at least a reasonable amount of thought into it. This applies to comments, blogs, emails, and, most troublingly, I've seen it happen at my job with things like requirement specs. The veneer of quality now makes it much harder to know how much skepticism the contents deserve, and it's too tiring to be maximally skeptical about everything.

  • What exactly are you claiming here? That a handful of theorems about the limits of mathematics and provability somehow combine to show that the current LLM-based AI developments will inevitably live up to what is expected of them? And that this is obvious to a select few? That all seems unlikely, to say the least.
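The long-context pricing rule quoted in the first comment can be sketched as a small calculator. This is a hypothetical illustration: only the 272K-token threshold and the 2x/1.5x multipliers come from the quoted comment; the base rates passed in below are made-up placeholders, not real prices, and the function name is my own.

```python
LONG_CONTEXT_THRESHOLD = 272_000  # input tokens, per the quoted comment


def session_cost(input_tokens, output_tokens,
                 input_rate_per_mtok, output_rate_per_mtok):
    """Estimate session cost in dollars under the quoted rule.

    When the prompt exceeds the threshold, the long-context
    multipliers (2x input, 1.5x output) apply to the FULL session,
    not just the tokens past the threshold.
    """
    if input_tokens > LONG_CONTEXT_THRESHOLD:
        in_mult, out_mult = 2.0, 1.5
    else:
        in_mult, out_mult = 1.0, 1.0
    return (input_tokens / 1e6 * input_rate_per_mtok * in_mult
            + output_tokens / 1e6 * output_rate_per_mtok * out_mult)
```

For example, with placeholder rates of $1/MTok in and $4/MTok out, a 300K-token prompt with 10K output tokens costs 0.3 × 1 × 2 + 0.01 × 4 × 1.5 = $0.66, versus $0.14 for a 100K-token prompt with the same output.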

HackerNews