I thought they were reasonably interesting as well, though not quite the same vibe as the original.
Maybe it's that whole sense of wonder thing. When you have no idea why this thing was built and sent here, it's easy to imagine it was something exotic, amazing, high and mighty, wholesome, etc. When it's revealed that the reason was quite ordinary and kind of distasteful to modern human sensibilities, it's kind of a let-down.
> nothing actually passes the Turing test
Says who? I had already found this study, published almost a year ago, saying that they do: https://arxiv.org/abs/2503.23674
There doesn't seem to be a super-rigorous definition of the Turing Test, but I don't think it's reasonable to require it to fool an expert whose life depends on the correct choice. LLMs already seem decently able to fool a person of average intelligence who has a basic knowledge of them.
I agree that we don't really have AGI yet, but I'd hope we can come up with a better definition of what it is than "we'll know it when we see it". I think it is a legitimate point that we've moved the goalposts some.
That seems a bit contrived to me. Okay, that particular place is pretty deeply nested, but it's clearly a regular menu tucked away in there, with an option to show the menu bar. If you turn that on, then those options are half as deep. Or if you don't need to adjust those options, you don't go that deep.
The sibling comment, meanwhile, is complaining about extra space devoted to explicit controls for all of the extra options. Well, you can't have it both ways. If you want to have a lot of features and options, you have to either devote some space in the main UI to them, or have a lot of deeply nested menus like that.
Or I guess you could do a config file somewhere, but IMO that's even worse. If we're going to complain about bad UIs, isn't needing to open a separate file somewhere else with a separate program, and learn whatever config file syntax they happen to use, even worse than some deeply nested menus?
The part that always struck me as weird about this stuff is that all of these "agents" with their "personas" are the same baseline LLMs with the same training ultimately, just told to basically pretend they're different. How far can that really get you?
I'm not actually a database engineer with 30 years of experience. If somebody demanded that I pretend to be one, I guess I'd give it a shot, but I would expect any actual employer would be able to tell that I don't have the level of knowledge and experience that you'd expect from somebody like that.
If the base LLM actually has the knowledge of all of these specialties, why can't it just apply them all at once, instead of needing to be told to pretend, I guess, to be only one of them?
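To make the point concrete: here's a minimal sketch, assuming an OpenAI-style chat message format (the model name and persona strings are hypothetical). Two "agent personas" end up as the exact same model with the exact same weights; the only thing that differs is the system prompt prepended to the conversation.

```python
# Hypothetical sketch: two "agents" built on one shared base model.
BASE_MODEL = "some-llm"  # same base model for every persona

def make_agent(persona: str):
    """Return a function that builds a chat request for this persona."""
    def build_request(user_msg: str) -> dict:
        return {
            "model": BASE_MODEL,  # identical weights regardless of persona
            "messages": [
                {"role": "system", "content": persona},  # only this differs
                {"role": "user", "content": user_msg},
            ],
        }
    return build_request

dba = make_agent("You are a database engineer with 30 years of experience.")
auditor = make_agent("You are a meticulous security auditor.")

req1 = dba("How should I index this table?")
req2 = auditor("How should I index this table?")
assert req1["model"] == req2["model"]  # same model underneath either persona
```

Everything the "database engineer" knows, the "auditor" already knows too; the persona just nudges which part of that shared knowledge gets surfaced.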