> Several commenters suggested the original essay was written by an LLM. They were half right. Both that essay and this one were written with Claude as a drafting partner. I directed the argument; the LLM helped with prose.
That’s all well and good, but I think he needs to take a closer look at some of the resulting prose and clarify a little more. Most of it is good, but there are some unclear statements, like this (right after his descriptions of “Camp A” and “Camp B”):
> Sutton says Camp B wins. My essay was filed under Camp A. But decision theory belongs to neither camp.
The second sentence quoted above doesn’t specify, but I’m pretty sure it means that it was filed under Camp A by the commenters, and incorrectly at that. If so, it would probably read better as:
> Sutton says Camp B wins. Commenters seemed to file my essay under Camp A, and then dismissed it. But that’s incorrect; decision theory belongs to neither camp.
Or something along those lines.
I honestly don’t think this is nit-picky feedback, either. These are crucial sentences that appear to lay out the main point of the essay, so it’s vitally important that they be clear: who “filed” the essay under a particular camp, and was that correct or incorrect? The passage should be revised to convey that, and to better connect it to whatever incorrect conclusions may have been drawn as a result. The information can be gleaned from the surrounding context, of course, but I found that this crucial passage threw off the flow of what was otherwise a really great essay.
> But, the pro-legalization folks would argue patently crazy things: it cures cancer, the smoke isn't bad for you at all, there are no downsides! etc.
Who seriously claimed that it “cures cancer”? There have been some claims that it helps alleviate nausea associated with chemotherapy, which is quite reasonable and will likely be proved out by evidence over time.
Really … who genuinely claimed it “cures” cancer?
Lo and behold, here’s a concrete example I stumbled across just a few seconds after opening Reddit again (really gotta stop doing that):
Reddit is the worst these days.
I pretty much never went there for technical topics at all, just funny memes and such. But one day recently I started seeing crazy AI hype stories getting posted, and sadly I made the huge mistake of clicking on one, and now it’s all I get.
Endless posts from subs like r/agi and r/singularity, as well as the various product-specific subs (for Claude, OpenAI, etc.). These aren’t even links to external articles; they’re supposedly personal accounts of someone being blown away by what the latest release of this or that model or tool can do. Every single one boils down to some irritating “game over for software engineers” hype fest, sometimes with skeptical comments calling out the clearly AI-generated text and overblown claims, sometimes not. Comments pointing out flaws in whatever’s being hyped are usually dismissed with a hand wave: the flaw may have been true at one time, but the latest and greatest version has no such flaws and is truly miraculous, even if it’s just a minor update for that week. It’s always the same pattern.
There’s clearly a lot of astroturfing going on.